In dynamic or interactive environments, a common challenge is to transform one configuration into another through a sequence of small, local modifications. Interesting settings range from robot motion planning, through games such as the $15$-puzzle and the Rubik's cube, to rotations of binary trees.
These reconfiguration processes raise many natural algorithmic questions: Can any two – or two given – configurations be transformed into each other? If so, it is of interest to bound the length of a shortest reconfiguration sequence between (any) two configurations, to compute the length of an optimal sequence, and to efficiently determine a short(est) sequence.
These questions can be nicely rephrased in terms of a so-called flip graph. The flip graph of a discrete reconfiguration problem has a vertex for each configuration and an edge between any two configurations that can be transformed into one another by a basic operation, called a flip. In terms of the flip graph, the questions then read as follows: Is the flip graph connected? What is the diameter of the flip graph? Can a shortest path (flip sequence) between two vertices of the flip graph be computed efficiently?
Perhaps the most famous flip graph is the flip graph of triangulations of convex point sets under edge exchange. This graph is the $1$-skeleton of the associahedron and corresponds to rotations of binary trees. For $n$-point sets in general position, the connectivity of the flip graph and its diameter of $\Theta(n^2)$ are classic results. Moreover, computing the flip distance is known to be $\mathsf{NP}$-hard and $\mathsf{APX}$-hard. For triangulations of convex point sets, the diameter of the flip graph is at most $2n-10$, which is tight for $n>12$. The computational complexity of computing the flip distance of two triangulations of a convex point set is a tantalizing open problem.
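To make these notions concrete, the following small Python sketch (our own illustration, not from the talk) enumerates the triangulations of a convex $n$-gon, encoded as sets of diagonals, and builds their flip graph; two triangulations are adjacent exactly when they differ in a single diagonal. For $n = 5$ it produces the $5$-cycle that is the $1$-skeleton of the $2$-dimensional associahedron.

```python
from itertools import combinations

def triangulations(poly):
    """All triangulations of the convex polygon with vertex tuple `poly`,
    each returned as a frozenset of diagonals (i, j) with i < j."""
    if len(poly) < 3:
        return [frozenset()]
    out = []
    a, b = poly[0], poly[-1]
    # The boundary edge (a, b) lies in exactly one triangle (a, c, b);
    # recurse on the two sub-polygons on either side of that triangle.
    for k in range(1, len(poly) - 1):
        c = poly[k]
        for L in triangulations(poly[: k + 1]):
            for R in triangulations(poly[k:]):
                chords = set(L) | set(R)
                if k != 1:                      # (a, c) is a diagonal
                    chords.add((min(a, c), max(a, c)))
                if k != len(poly) - 2:          # (c, b) is a diagonal
                    chords.add((min(c, b), max(c, b)))
                out.append(frozenset(chords))
    return out

def flip_graph(n):
    """Vertices: triangulations of the convex n-gon; edges: pairs that
    differ in exactly one diagonal, i.e., a single flip."""
    verts = triangulations(tuple(range(n)))
    edges = [(s, t) for s, t in combinations(verts, 2) if len(s ^ t) == 2]
    return verts, edges

verts, edges = flip_graph(5)
print(len(verts), len(edges))  # 5 triangulations, 5 flips: a 5-cycle
```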
In this talk, we discuss several settings including the reconfiguration of non-crossing graphs on point sets (e.g., triangulations, spanning trees, ...), as well as the reconfiguration of geometric objects in polygonal domains.
Coordinating the motion of multiple agents in constrained environments is a fundamental challenge in robotics, motion planning, and scheduling. A motivating example involves $n$ robotic arms, each represented as a line segment. The objective is to rotate each arm to its vertical orientation, one at a time, without collisions. This scenario is an example of the more general $k\text{-C{\small OMPATIBLE}O{\small RDERING}}$ problem, where $n$ agents, each capable of $k$ state-changing actions, must transition to specific target states under constraints encoded as a set $\mathcal{G}$ of $k$ pairs of directed graphs.
We show that $k\text{-C{\small OMPATIBLE}O{\small RDERING}}$ is $\mathsf{NP}$-complete, even when $\mathcal{G}$ is planar, degenerate, or acyclic. On the positive side, we provide polynomial-time algorithms for cases such as when $k = 1$ or $\mathcal{G}$ has bounded treewidth. We also introduce generalized variants supporting multiple state-changing actions per agent. These results extend to a wide range of scheduling, reconfiguration, and motion planning applications.
This paper investigates the problem of reconstructing a sequence from its Range Minimum Query (RMQ) results. We explore two types of RMQs: value-based RMQ, which returns the minimum value in the queried range, and index-based RMQ, which returns the index of the minimum. We address several problems for both types, including finding the lexicographically smallest consistent sequence or permutation, and enumerating or counting all consistent permutations.
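For concreteness, the two query types can be specified as follows (a minimal Python sketch of our own, not from the paper; for index-based RMQ we assume the leftmost-minimum convention, which is immaterial for permutations since they have no ties). Reconstruction then asks for a sequence or permutation whose tables agree with the given answers.

```python
def rmq_tables(a):
    """All-pairs RMQ answers for the sequence a.

    val[(i, j)]: value-based RMQ, the minimum value on a[i..j].
    idx[(i, j)]: index-based RMQ, the position of that minimum
                 (leftmost on ties; an assumed convention).
    """
    n = len(a)
    val, idx = {}, {}
    for i in range(n):
        m, p = a[i], i
        for j in range(i, n):
            if a[j] < m:
                m, p = a[j], j
            val[(i, j)], idx[(i, j)] = m, p
    return val, idx

val, idx = rmq_tables([3, 1, 2])
print(val[(0, 2)], idx[(0, 2)])  # 1 1
```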
In this paper, we study the problems of Abelian group isomorphism and basis construction in two models. In the partially specified model (PS-model), the algorithm does not know the group size but can access randomly chosen elements of the group along with the Cayley table of those elements, which provides the result of the binary operation for every pair of selected elements. In the stronger fully specified model (FS-model), the algorithm knows the size of the group and has access to its elements and Cayley table.
Given two Abelian groups $G$ and $H$, we present an algorithm in the PS-model (and hence in the FS-model) that runs in time $\tilde O(\sqrt{|G|})$ and decides whether they are isomorphic. This improves on Kavitha's linear-time algorithm and gives the first sublinear-time solution for this problem. We then prove the lower bound $\Omega(|G|^{1/4})$ for the FS-model and the tight bound $\Omega(\sqrt{|G|})$ for the PS-model. These are the first known lower bounds for this problem. We obtain similar results for finding a basis of Abelian groups. For deterministic algorithms, we give a simple $\Omega(|G|)$ lower bound.
This paper investigates the reconfiguration variant of the Constraint Satisfaction Problem (CSP), referred to as the Reconfiguration CSP (RCSP). Given a CSP instance and two of its solutions, RCSP asks whether one solution can be transformed into the other via a sequence of intermediate solutions, each differing by the assignment of a single variable. RCSP has attracted growing interest in theoretical computer science, and when the variable domain is Boolean, the computational complexity of RCSP exhibits a dichotomy depending on the allowed constraint types. A notable special case is the reconfiguration of graph homomorphisms—also known as graph recoloring—which has been studied using topological methods. We propose a novel algebraic approach to RCSP, inspired by techniques used in classical CSP complexity analysis. Unlike traditional methods based on total operations, our framework employs partial operations to capture a reduction involving equality constraints. This perspective also facilitates the extension of complexity results from Boolean domains to more general settings, demonstrating the versatility of partial operations in identifying tractable RCSP instances. Moreover, it opens new avenues for exploring the structural principles that underlie reconfiguration problems across diverse domains.
We present algorithms for computing invariants of K-theoretic persistent homology, extending beyond the additive summaries standard in persistent homology in computational geometry. Building on fundamental theorems (the interval calculus, the Betti–exterior identity, and the integral formula), we provide algorithms for compressed representations, output-sensitive enumeration, and fast statistics for concurrency layers. We establish complexity results and demonstrate practical advantages on graph filtrations.
A word is said to be bordered if it has a nonempty proper prefix that is also a suffix. A pair of words $(u, v)$ is said to be mutually bordered if there exists a word that is a nonempty proper prefix of $u$ and a suffix of $v$, and a word that is a nonempty proper suffix of $u$ and a prefix of $v$. Recently, Gabric studied the number of mutually bordered pairs. In this work, we extend the concept of mutually bordered pairs to the abelian setting and determine the number of mutually abelian-bordered pairs of binary words using lattice paths. We also determine the number of unbordered pairs in this setting.
A simple graph is called a word-representable graph if there is a word over its vertex set such that any two vertices are adjacent in the graph if and only if they alternate in the word; such a word is called a word-representant of the graph. Srinivasan and Hariharasubramanian proved that a minimum length word-representant for a non-complete triangle-free circle graph on $n$ vertices with at least one edge has length $2n-2$. Further, they posed an open problem: find classes of word-representable graphs whose minimum length word-representants have length $2n-k$, where $n$ is the number of vertices and $k$ is the clique number of the graph.
A graph is called a treelike comparability graph if it admits a transitive orientation such that the transitive reduction is a tree. When such a graph is also a permutation graph, it is called a treelike permutation graph. Recently, a subclass of treelike permutation graphs, viz., the class of double-arborescences, was established to be the first example satisfying the criterion given in the above-mentioned open problem. In this work, we devise a polynomial-time algorithm to construct a minimum length word-representant for a given treelike permutation graph and show that all treelike permutation graphs satisfy the criterion of the open problem.
We study the notion of full-compatibility: given two one-dimensional subshifts $H$ and $V$, is there a two-dimensional subshift $X$ whose set of rows is exactly $H$ and whose set of columns is exactly $V$? We show that this problem is decidable when both $H$ and $V$ are nearest-neighbor SFTs, but undecidable when at least one of them is allowed to be of slightly higher complexity. We also prove that the problem is undecidable for three nearest-neighbor SFTs combined in a three-dimensional subshift.
We introduce a new representation of the palindromic structure of a string that achieves asymptotically optimal space while supporting optimal retrieval queries. The approach relies on combinatorial properties of palindromes, yielding a compact yet efficient structure. This work fits within the broader study of compressed text indexing and highlights structural aspects of palindromic substrings that may prove useful in further algorithmic applications.
We show that every two-way deterministic finite automaton (2DFA) that solves one-way liveness on height $h$ has $\Omega(h^2)$ states. This implies a quadratic lower bound for converting one-way nondeterministic finite automata to 2DFAs, which asymptotically matches Chrobak’s well-known lower bound for this conversion on unary languages. In contrast to Chrobak’s simple proof, which relies on a 2DFA’s inability to differentiate between any two sufficiently distant locations in a unary input, our argument works on alphabets of arbitrary size and is structured around a main lemma that is general enough to potentially be reused elsewhere.
The planted clique problem is a classical problem in average-case complexity, which studies efficient algorithms that succeed on typical inputs drawn from natural distributions. An instance is generated by sampling an Erdős–Rényi random graph on $n$ vertices, where each edge appears independently with probability $1/2$, then selecting a random set $S$ of $k$ vertices and adding all edges among them — planting a clique. Given such a graph, the goal is to recover the set $S$.
By analyzing the expected number of $k$-cliques in an Erdős–Rényi graph, one finds that such graphs are unlikely to contain cliques much larger than $2 \log n$. Similarly, when $k \gg 2 \log n$, the planted clique is typically the unique largest clique, and an inefficient algorithm can recover $S$ with high probability. Indeed, enumerating all candidate sets yields a quasi-polynomial-time algorithm running in $n^{O(\log n)}$, showing that the planted clique is information-theoretically identifiable in this regime.
This motivates the question of when a polynomial-time algorithm exists. For $k \ge \Omega(\sqrt{n} \log n)$, the clique can be found by inspecting vertex degrees. A celebrated result of Alon, Krivelevich, and Sudakov shows that even for $k = \Theta(\sqrt{n})$, the clique can be recovered using spectral methods based on the adjacency matrix. The famous planted clique conjecture posits an information-theoretic/computational gap: for $k$ between $2 \log n$ and $o(\sqrt{n})$, no polynomial-time algorithm can recover the planted clique with constant probability, despite the existence of inefficient algorithms that can do so.
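For the regime above the spectral threshold, both the instance generation and the degree-based recovery are a few lines of code. The following Python sketch is purely illustrative (our own, with hypothetical parameter choices $n = 2000$, $k = 300$): clique vertices have expected degree roughly $n/2 + k/2$ versus $n/2$ for the rest, so for $k \ge c\sqrt{n \log n}$ the $k$ largest degrees recover $S$ with high probability.

```python
import random

def planted_clique(n, k, seed=0):
    """G(n, 1/2) plus a clique planted on a random k-subset S."""
    rng = random.Random(seed)
    S = set(rng.sample(range(n), k))
    adj = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            e = (i in S and j in S) or rng.random() < 0.5
            adj[i][j] = adj[j][i] = e
    return adj, S

def recover_by_degree(adj, k):
    """Clique vertices have inflated degree (~ n/2 + k/2 vs ~ n/2 with
    fluctuations of order sqrt(n)), so take the k largest degrees."""
    n = len(adj)
    by_degree = sorted(range(n), key=lambda v: sum(adj[v]), reverse=True)
    return set(by_degree[:k])

adj, S = planted_clique(2000, 300)
print(recover_by_degree(adj, 300) == S)  # True w.h.p. in this regime
```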
Enumeration Kernelization was first proposed by Creignou et al. [TOCS 2017] and was later refined by Golovach et al. [JCSS 2022, STACS 2021] into two different variants: fully-polynomial enumeration kernelization and polynomial-delay enumeration kernelization. A central problem in the literature on this topic is MATCHING CUT. In this paper, we consider the more general NP-complete $d$-CUT problem. The decision version of $d$-CUT asks if a given undirected graph $G$ has a cut $(A, B)$ such that every $u \in A$ has at most $d$ neighbors in $B$ and every $v \in B$ has at most $d$ neighbors in $A$. We study three enumeration variants of $d$-CUT — ENUM $d$-CUT, ENUM MIN $d$-CUT, and ENUM MAX $d$-CUT — in which one aims to enumerate all $d$-cuts, all minimal $d$-cuts, and all maximal $d$-cuts, respectively. We consider various structural parameters of the input graph, such as its vertex cover number and neighborhood diversity. When the vertex cover number or the neighborhood diversity is the parameter, we provide a fully-polynomial enumeration kernelization of polynomial size for ENUM MIN $d$-CUT, and polynomial-delay enumeration kernelizations for ENUM $d$-CUT and ENUM MAX $d$-CUT. Finally, when the clique partition number is the parameter, we provide a fully-polynomial, in fact bijective, enumeration kernelization for each of the three problems.
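To make the central definition concrete, here is a straightforward checker (a sketch of our own; `adj` is assumed to map each vertex to its set of neighbours):

```python
def is_d_cut(adj, A, B, d):
    """(A, B) is a d-cut: a partition of V into two non-empty sides such
    that every vertex has at most d neighbours on the opposite side."""
    if not A or not B or (A & B) or (A | B) != set(adj):
        return False
    return all(
        sum(1 for u in adj[v] if u in other) <= d
        for side, other in ((A, B), (B, A))
        for v in side
    )
```

Note that a $1$-cut is exactly a matching cut, recovering the special case mentioned above.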
For integers $k,g,d$, a $(k;g,d)$-cage (or simply girth-diameter cage) is a smallest $k$-regular graph of girth $g$ and diameter $d$ (if it exists). We determine asymptotic lower and upper bounds for the ratio between the order and the diameter of girth-diameter cages as the diameter goes to infinity. We also prove that this ratio can be computed in constant time for fixed $k$ and $g$. We theoretically determine the exact values of $n(3;g,d)$, and count the corresponding girth-diameter cages, for $g \in \{4,5\}$. Moreover, we design and implement an exhaustive graph generation algorithm and use it to determine the exact order of several open cases and obtain — often exhaustive — sets of the corresponding girth-diameter cages. The largest case we generated and settled with our algorithm is a $(3;7,35)$-cage of order $136$.
Although Extension Perfect Roman Domination is NP-complete, all minimal (with respect to the pointwise order) perfect Roman dominating functions can be enumerated with polynomial delay. That algorithm uses a bijection between minimal perfect Roman dominating functions and Roman dominating functions, together with the fact that all minimal Roman dominating functions can be enumerated with polynomial delay. The bijection considers the set of vertices with value $2$ under the functions. In this paper, we generalize this idea by defining so-called nice Roman domination properties to which the method applies. With this idea, we show that all minimal maximal Roman dominating functions can be enumerated with polynomial delay in $O(1.9332^n)$ time. Furthermore, we prove that all minimal connected Roman dominating functions on cobipartite or interval graphs can be enumerated with polynomial delay. We also discuss some limitations of this method.
The $3$-admissibility of a graph is a promising measure to identify real-world networks which have an algorithmically favourable structure.
We design an algorithm which decides whether the $3$-admissibility of an input graph $G$ is at most $p$ in time $O(m p^7)$ and space $O(n p^3)$; to the best of our knowledge, this is the first explicit algorithm to compute the $3$-admissibility.
The linear dependence on the input size in both time and space complexity, coupled with an `optimistic' design philosophy for the algorithm itself, makes the algorithm practical, as we demonstrate with an experimental evaluation on a corpus of $217$ real-world networks.
Our experimental results show, surprisingly, that the $3$-admissibility of most real-world networks is not much larger than the $2$-admissibility, despite the former having better algorithmic properties than the latter.
In a phylogenetic tree, present-day species are leaves and an edge from $u$ to $v$ indicates that $u$ is an ancestor of $v$. Weights on these edges indicate the phylogenetic distance. The phylogenetic diversity (PD) of a set of species $A$ is the total weight of edges that are on any path between the root of the phylogenetic tree and a species in $A$. Selecting a small set of species that maximizes phylogenetic diversity for a given phylogenetic tree is an essential task in preservation planning, where limited resources naturally prevent saving all species. An optimal solution can be found with a greedy algorithm [Steel, Systematic Biology, 2005; Pardi and Goldman, PLoS Genetics, 2005]. However, when a food web representing predator-prey relationships is given, finding a set of species that optimizes phylogenetic diversity subject to the condition that each saved species should be able to find food among the preserved species is NP-hard [Spillner et al., IEEE/ACM, 2008]. We present a generalization of this problem, where, inspired by biological considerations, the food web has weighted edges to represent the importance of predator-prey relationships. We show that this version is NP-hard even when both structures, the food web and the phylogenetic tree, are stars. To cope with this intractability, we proceed in two directions. Firstly, we study special cases where a species can only survive if a given fraction of its prey is preserved. Secondly, we analyze these problems through the lens of parameterized complexity. Our results include that finding a solution is fixed-parameter tractable with respect to the vertex cover number of the food web, assuming the phylogenetic tree is a star.
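For the unconstrained problem on a tree, the greedy algorithm cited above is short enough to state in code. The following Python sketch is our own illustration under an assumed encoding (`parent` and `weight` map each non-root vertex to its parent and to the weight of its incoming edge), not the papers' implementation:

```python
def pd(parent, weight, A):
    """Phylogenetic diversity of A: total weight of edges on a path from
    the root to a member of A. Each non-root vertex v stands for the
    edge (parent[v], v), whose weight is weight[v]."""
    used = set()
    for v in A:
        while v in parent and v not in used:  # stop at root or shared path
            used.add(v)
            v = parent[v]
    return sum(weight[v] for v in used)

def greedy_pd(parent, weight, leaves, k):
    """Steel / Pardi-Goldman greedy: repeatedly add the species with the
    largest marginal PD gain; this is optimal on trees."""
    A = set()
    for _ in range(min(k, len(leaves))):
        A.add(max(leaves - A, key=lambda v: pd(parent, weight, A | {v})))
    return A
```

The weighted food-web constraints studied in the paper are exactly what breaks this greedy optimality.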
In this paper, we study the dominating set problem in RDV graphs, i.e., vertex-intersection graphs of downward paths in a rooted tree. It was shown in a previous paper that adjacency queries in an RDV graph can be reduced to the question of whether a horizontal segment intersects a vertical segment. This was then used to find a maximum matching in an RDV graph efficiently, using priority search trees. In this paper, we show that if we additionally use a ray shooting data structure, we can also find a minimum dominating set in an RDV graph in $O(n \log n)$ time (presuming a linear-sized representation of the graph is given), i.e., without even looking at all edges.
Previous work on parallel quantum algorithms has focused on minimizing the length of the critical path of the circuit (i.e., query span), motivated by the small decoherence times of qubits. In this paper, we study similar settings where queries to a quantum circuit can be made in parallel in each timestep. Rather than focus exclusively on query span, however, we are interested in also minimizing the total number of queries made across all parallel processes (i.e., query work). We give parallel quantum algorithms that solve the maxima set and convex hull problems for $n$ lexicographically sorted points in the plane in $\tilde{O}(\sqrt{n})$ query span and $\tilde{O}(\sqrt{nh})$ query work, where $h$ is the size of the output. These results therefore resolve a natural question of whether we can use quantum computers in parallel to improve the depth of a quantum algorithm without requiring more work than a sequential algorithm. Also, note that our work bounds are sublinear when $h$ is $O(n^{1-\epsilon})$ for some constant $\epsilon>0$.
Given a set of $n$ colored points $P \subset \mathbb{R}^d$, we wish to store $P$ such that, given a query region $Q$, we can efficiently report the colors of the points appearing in $Q$, along with their frequencies. This is the color frequency reporting problem. We study the case where the query regions are axis-aligned boxes or dominance ranges. If $Q$ contains $k$ colors, the main goal is to achieve ``strictly output sensitive'' query time $O(f(n) + k)$. We show that there exists a simple $O(ns\log_s n)$ size data structure for points in $\mathbb{R}^2$ that allows frequency reporting queries in $O(\log n + k\log_s n)$ time, where $s \in \{2, \ldots, n\}$ is a parameter. Furthermore, we give a lower bound for the weighted version of the problem in the arithmetic model, proving that with $O(m)$ space one cannot achieve query times better than $\Omega\left(\phi \frac{\log (n/\phi)}{\log (2m/n)}\right)$, where $\phi$ is the number of possible colors. This means our data structure is near-optimal. We extend these results to higher dimensions as well. Finally, we give an $O(n^{1+\epsilon} + m \log n + K)$ time algorithm that answers $m$ dominance queries in $\mathbb{R}^2$ with total output complexity $K$, while using only linear working space.
The capacitated vehicle routing problem (c-VRP), also known as the k-tour cover problem, has been widely studied in many settings. In the Euclidean plane, a PTAS for constant capacity c was given more than 40 years ago and has been improved and extended ever since. Here, we present arguably the simplest possible form of a PTAS: the algorithm simply chooses between greedy and complete enumeration. A direct corollary of this simplification is that the PTAS extends to essentially any kind of order restriction, as long as the objective is to minimize total tour length.
Furthermore, we show that when the metric is a tree, there is a clear separation in complexity between the c-VRP with and without order restrictions. We show that even for a fixed order, the c-VRP is APX-hard for arbitrary (non-constant) capacity c. This is in contrast to the polynomial-time approximation scheme for the standard c-VRP on trees.
In this paper, we consider the problem of searching for the largest empty rectangle in a 2D map; the one-dimensional version is the problem of searching for the largest empty segment. We present quantum algorithms for the Largest Empty Square problem and for the Largest Empty Rectangle problem with a fixed width $d$ on an $n\times n$ rectangular map. The query complexities of the algorithms are $\tilde{O}(n^{1.5})$ and $\tilde{O}(n\sqrt{d})$, respectively, while the corresponding classical lower bounds are $\Omega(n^2)$ and $\Omega(nd)$. Our quantum algorithm for the one-dimensional version of the problem has $O(\sqrt{n}\log n\log\log n)$ query complexity. The quantum lower bound for this problem is $\Omega(\sqrt{n})$, which matches the upper bound up to logarithmic factors. The classical lower bound is $\Omega(n)$, so we obtain a quadratic speed-up.
Powerful insights arise from linking two fields of study previously thought separate. Examples include Descartes’s coordinates, which links geometry to algebra, Planck’s Quantum Theory, which links particles to waves, and Shannon’s Information Theory, which links thermodynamics to communication. Such a synthesis is offered by the principle of Propositions as Types, which links logic to computation. At first sight it appears to be a simple coincidence—almost a pun—but it turns out to be remarkably robust, inspiring the design of automated proof assistants and programming languages, and continuing to influence the forefronts of computing.
Propositions as Types is a notion with many names and many origins. It is closely related to the BHK Interpretation, a view of logic developed by the intuitionists Brouwer, Heyting, and Kolmogorov in the 1930s. It is often referred to as the Curry-Howard Isomorphism, referring to a correspondence observed by Curry in 1934 and refined by Howard in 1969. Others draw attention to significant contributions from de Bruijn’s Automath and Martin-Löf’s Type Theory in the 1970s.
Propositions as Types is a notion with depth. It describes a correspondence between a given logic and a given programming language. At the surface, it says that for each proposition in the logic there is a corresponding type in the programming language—and vice versa. Thus we have
propositions as types.
It goes deeper, in that for each proof of a given proposition, there is a program of the corresponding type—and vice versa. Thus we also have
proofs as programs.
And it goes deeper still, in that for each way to simplify a proof there is a corresponding way to evaluate a program—and vice versa. Thus we further have
simplification of proofs as evaluation of programs.
Hence, we have not merely a shallow bijection between propositions and types, but a true isomorphism preserving the deep structure of proofs and programs, simplification and evaluation.
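The three levels of the correspondence can be made concrete in a proof assistant, where a proposition is literally a type and a proof is literally a program. A small illustration in Lean (our own, not from the talk):

```lean
-- Propositions as types: the proposition A ∧ B → A is a type.
-- Proofs as programs: a proof of it is a function of that type.
example {A B : Prop} : A ∧ B → A :=
  fun h => h.left

-- Modus ponens is function application ...
example {A B : Prop} (f : A → B) (a : A) : B := f a

-- ... and simplification of proofs is evaluation of programs:
-- β-reducing (fun h => h.left) ⟨a, b⟩ to a mirrors cutting a detour
-- (∧-introduction immediately followed by ∧-elimination) out of a proof.
```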
Propositions as Types is a notion with breadth. It applies to a range of logics including propositional, predicate, second-order, intuitionistic, classical, modal, and linear. It underpins the foundations of functional programming, explaining features including functions, records, variants, parametric polymorphism, data abstraction, continuations, linear types, and session types. It has inspired automated proof assistants and programming languages including Agda, Automath, Coq, Epigram, F#, F⋆, Haskell, Lean, LF, ML, NuPRL, Scala, Singularity, and Trellys.
Propositions as Types is a notion with mystery. Why should it be the case that intuitionistic natural deduction, as developed by Gentzen in the 1930s, and simply-typed lambda calculus, as developed by Church around the same time for an unrelated purpose, should be discovered thirty years later to be essentially identical? And why should it be the case that the same correspondence arises again and again? The logician Girard and the computer scientist Reynolds independently developed the same calculus, now dubbed Girard-Reynolds. The logician Hindley and the computer scientist Milner independently developed the same type system, now dubbed Hindley-Milner. Curry-Howard is a double-barrelled name that ensures the existence of other double-barrelled names. Those of us that design and use programming languages may often feel they are arbitrary, but Propositions as Types assures us some aspects of programming are absolute.
The talk is based on a paper that appeared in Communications of the ACM, and the abstract above is taken from that paper.
We introduce a two-way connection between FO transductions (first-order logical transformations) of planar graphs and a certain variant of fan-crossing (fan-planar) drawings of graphs which for bounded-degree graphs essentially reduces to being k-planar for fixed k. For graph classes, this connection allows us to derive non-transducibility results from the nonexistence of such drawings and, conversely, to derive the nonexistence of such drawings from the nonexistence of a transduction. For example, the class of 3D-grids is not k-planar for any fixed k. We hope that this connection will help to draw a path to a possible proof that not all toroidal graphs are transducible from planar graphs.
Our characterization can be extended to any fixed surface instead of the plane. The result is based on a very recent characterization of weakly sparse FO transductions of classes of bounded expansion by [Gajarsky, Gladkowski, Jedelsky, Pilipczuk and Torunczyk, arXiv:2505.15655].
In a recent paper, Francis, Illickan, Jose and Rajendraprasad showed that every $n$-vertex plane graph $G$ has (under some natural restrictions) a vertex-partition into two sets $V_1$ and $V_2$ such that each $V_i$ is dominating (every vertex of $G$ has a vertex of $V_i$ in its closed neighbourhood) and face-hitting (every face of $G$ is incident to a vertex of $V_i$). Their proof works by considering a super-graph $G'$ of $G$ that has certain properties and, among all such graphs, taking one that has the fewest edges. As such, their proof is not algorithmic. It also relies on the $4$-color theorem, for which a quadratic-time algorithm exists but would not be easy to implement.
In this paper, we give a new proof that every $n$-vertex plane graph $G$ has (under the same restrictions) a vertex-partition into two dominating face-hitting sets. Our proof is constructive, and requires nothing more complicated than splitting a graph into $2$-connected components, finding an ear decomposition, and computing a perfect matching in a $3$-regular plane graph. For all these problems, linear-time algorithms are known and so we can find the vertex-partition in linear time.
The Steiner tree problem is one of the most prominent problems in network design. Given an edge-weighted undirected graph and a subset of the vertices, called terminals, the task is to compute a minimum-weight tree containing all terminals (and possibly further vertices). The best-known approximation algorithms for Steiner tree involve enumeration of a (polynomial but) very large number of candidate components and are therefore slow in practice.
A promising ingredient for the design of fast and accurate approximation algorithms for Steiner tree is the bidirected cut relaxation (BCR): bidirect all edges, choose an arbitrary terminal as a root, and enforce that each cut containing some terminal but not the root has one unit of fractional edges leaving it. BCR is known to be integral in the spanning tree case, i.e., when all the vertices are terminals. For general instances, however, it was not even known whether the integrality gap of BCR is better than the integrality gap of the natural undirected relaxation, which is exactly $2$. We resolve this question by proving an upper bound of $1.9988$ on the integrality gap of BCR.
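In symbols, with terminal set $T$, root $r \in T$, bidirected arc set $A$ (each edge giving two oppositely directed arcs of the same cost), and $\delta^{+}(U)$ denoting the arcs leaving $U$, BCR reads as follows (our transcription of the verbal description above):
\[
\min \sum_{a \in A} c_a x_a
\quad\text{s.t.}\quad
\sum_{a \in \delta^{+}(U)} x_a \ge 1
\quad \forall\, U \subseteq V \setminus \{r\} \text{ with } U \cap T \neq \emptyset,
\qquad x \ge 0.
\]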
In the second part of the talk, I discuss an LP relaxation for Steiner Forest that generalizes BCR for Steiner Tree. We prove that this relaxation has several promising properties. Among them: any half-integral LP solution to a Steiner Forest instance can be rounded while increasing the cost by at most a factor of $16/9$. To prove this result, we introduce a novel recursive densest-subgraph contraction algorithm.
The talk is based on joint works with Fabrizio Grandoni and Vera Traub.
The Exact Matching (EM) problem asks whether there exists a perfect matching which uses a prescribed number of red edges in a red/blue edge-colored graph. While there exists a randomized polynomial-time algorithm for the problem, only some special cases admit a deterministic one so far, making it a natural candidate for testing the P=RP hypothesis. A polynomial-time equivalent problem, Top-k Perfect Matching (TkPM), asks for a perfect matching maximizing the weight of the k heaviest edges.
We study the above problems, mainly the latter, in the scenario where the input is a blown-up graph, meaning a graph whose vertices have been replaced by cliques or independent sets. We describe an FPT algorithm for TkPM parameterized by k and the neighborhood diversity of the input graph, which is essentially the size of the graph before the blow-up; this graph is also called the prototype. We extend this algorithm into an approximation scheme whose running time has a much softer dependency on these parameters. Moreover, for prototypes with bounded bandwidth but unbounded size, we develop a recursive algorithm that runs in subexponential time. Utilizing another algorithm for EM on bounded neighborhood diversity graphs, we adapt this recursive subexponential algorithm to EM.
Our approach is similar to the use of dynamic programming on, e.g., bounded-treewidth instances for various problems. The main point is that we exploit the existence of many disjoint separators to avoid including any of a set of bad vertices in the separator during the split phase.
We study the problem of transforming bipartite graphs into bicluster graphs. Abu-Khzam, Isenmann, and Merchad [IWOCA '25] introduced two variants of this problem. In both problems, the goal is to transform a bipartite graph into a bicluster graph with at most $k$ operations, where the allowed operations are inserting an edge, deleting an edge, and splitting a vertex. Splitting a vertex $v$ refers to replacing $v$ by two new vertices whose combined neighborhood equals the neighborhood of $v$. The latter models overlapping clusters, that is, vertices belonging to multiple clusters, and is motivated by several real-world applications. The versions differ in that one variant allows splitting any vertex, while the second variant only allows vertex splits on one side of the bipartition.
Regarding computational complexity, they showed APX-hardness for both variants and a polynomial kernel (with $O(k^5)$ vertices) for the one-sided variant. They asked as open problems whether the polynomial kernel can be improved and whether it can also be extended for the other variant. We answer both questions in the affirmative and give kernels with $O(k^2)$ vertices for both variants. We also show that both problems can be solved in $O(k^{11k} + n + m)$ time, where $n$ and $m$ denote the number of vertices and edges in the input graph, respectively.
A homeomorphically irreducible spanning tree (HIST) is a spanning tree with no degree-$2$ vertices, serving as a structurally minimal backbone of a graph. While the existence of HISTs has been widely studied from a structural perspective, the algorithmic complexity of finding them remains less understood. In this paper, we provide a comprehensive investigation of the HIST problem from both structural and algorithmic viewpoints. We present a simple characterization that precisely describes which chordal graphs of diameter at most $3$ admit a HIST, leading to a polynomial-time decision procedure for this class. In contrast, we show that the problem is NP-complete for strongly chordal graphs of diameter $4$. From the perspective of parameterized complexity, we establish that the HIST problem is W[1]-hard when parameterized by clique-width, indicating that the problem is unlikely to be efficiently solvable in general dense graphs. On the other hand, we present fixed-parameter tractable (FPT) algorithms when parameterized by treewidth, modular-width, or cluster vertex deletion number. Specifically, we develop an $O^*(4^{k})$-time algorithm parameterized by the modular-width $k$, and an FPT algorithm parameterized by the cluster vertex deletion number, based on kernelization techniques that bound clique sizes while preserving the existence of a HIST. These results together provide a clearer understanding of the structural and computational boundaries of the HIST problem.
A set $D$ of vertices of a graph is a defensive alliance if, for each element of $D$, the majority of its neighbors are in $D$. In this paper, we consider the notion of local minimality. A defensive alliance $D$ is called a locally minimal defensive alliance if removing any vertex $v$ from $D$ destroys the defensive alliance property, i.e., $D \setminus \{v\}$ is no longer a defensive alliance (see Bazgan et al., 2019). Given an undirected graph $G = (V, E)$ and an integer $k$, we study the Locally Minimal Defensive Alliance problem, where the goal is to check whether $G$ has a locally minimal defensive alliance of size at least $k$. This problem is known to be NP-hard, but its parameterized complexity remained open until now. We enhance our understanding of the problem from the viewpoint of parameterized complexity by showing that (1) the problem admits a fixed-parameter tractable (FPT) algorithm on general graphs when parameterized by the solution size $k$, and (2) it admits a subexponential-time algorithm on planar graphs of minimum degree at least two, via the bidimensionality framework.
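Both properties admit a direct checker (a sketch of our own; `adj` maps each vertex to its neighbour set, and we use the convention $|N(v) \cap D| \ge |N(v) \setminus D|$ for "majority", noting that threshold conventions vary slightly in the literature):

```python
def is_defensive_alliance(adj, D):
    """Every v in D has at least as many neighbours inside D as outside
    (one common 'majority' convention; some definitions also count v)."""
    return all(
        sum(u in D for u in adj[v]) >= sum(u not in D for u in adj[v])
        for v in D
    )

def is_locally_minimal(adj, D):
    """D is a defensive alliance, but removing any single vertex breaks it."""
    return is_defensive_alliance(adj, D) and all(
        not is_defensive_alliance(adj, D - {v}) for v in D
    )
```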
Phylogenetic networks allow modeling reticulate evolution, capturing events such as hybridization and horizontal gene transfer. A fundamental computational problem in this context is the Tree Containment problem, which asks whether a given phylogenetic network is compatible with a given phylogenetic tree. However, the classical statement of the problem is not robust to poorly supported branches in biological data, possibly leading to false negatives. In an effort to address this, a relaxed version that accounts for uncertainty, called Soft Tree Containment, has been introduced by Bentert, Malík, and Weller [SWAT'18]. We present an algorithm that solves Soft Tree Containment in $2^{O(\Delta_T \cdot k \cdot \log(k))} \cdot n^{O(1)}$ time, where $k := sw(\Gamma) + \Delta_N$, with $\Delta_T$ and $\Delta_N$ denoting the maximum out-degrees in the tree and the network, respectively, and $sw(\Gamma)$ denoting the "scanwidth" [Berry, Scornavacca, and Weller, SOFSEM'20] of a given tree extension of the network, while $n$ is the input size. Our approach leverages the fact that phylogenetic networks encountered in practice often exhibit low scanwidth, making the problem more tractable.
We rigorously measure the reverse-mathematical strength of fundamental analytic principles for two neural architectures, the Transformer and the MLP (MultiLayer Perceptron). We prove that the standard compactness principle for Transformers is equivalent (over $\mathsf{RCA}_0$) to $\mathsf{ACA}_0$. For ReLU MLPs on compact domains, we prove that the standard universal approximation theorem for MLPs (with rational parameters) is equivalent to $\mathsf{WKL}_0$. These results mean that Transformer compactness strictly exceeds MLP universality in the reverse-mathematical hierarchy. This paves the way for a systematic program of reverse-mathematical classification of neural networks, bridging proof theory, computable analysis, and machine learning in a novel, interdisciplinary manner.
We investigate the online unweighted bipartite matching problem in the random arrival order model, with $n$ offline and $n$ online vertices, in the learning-augmented setting: The algorithm is provided with untrusted predictions of the types (neighborhoods) of the online vertices. We build upon the work of Choo et al. (ICML 2024, pp. 8762-8781) who proposed an approach that uses a prefix of the arrival sequence as a sample to determine whether the predictions are close to the true arrival sequence and then either follows the predictions or uses a known baseline algorithm that ignores the predictions and is $\beta$-competitive. Their analysis is limited to the case that the optimal matching has size $n$, i.e., every online vertex can be matched. We generalize their approach and analysis by removing any assumptions on the size of the optimal matching while only requiring that the size of the predicted matching is at least $\alpha n$ for any constant $0 < \alpha \le 1$. Our learning-augmented algorithm achieves $(1-o(1))$-consistency and $(\beta-o(1))$-robustness. Additionally, we show that the competitive ratio degrades smoothly between consistency and robustness with increasing prediction error.
A king in an $n$-vertex tournament graph $G$ is a vertex that can reach any other vertex $v$ with a path of length at most two. A kingdom is a data structure that given any vertex $v$ returns such a path from a king to $v$ in $O(1)$ time. In this paper, we show how to maintain a kingdom while the tournament graph $G$ undergoes updates. We consider both edge updates that flip the direction of an edge, and vertex updates that insert/delete vertices by activating/deactivating rows and columns of the graph's adjacency matrix.
For a single edge-flip, we show that after $O(n^{1.5})$ preprocessing time, we can maintain a kingdom in $O(1)$ time following the edge-flip. With $O(n^2)$ preprocessing time, we can support any constant number of edge-flips, vertex insertions, and vertex deletions in $O(\log n)$ time per operation. For an arbitrary number of edge-flips, we present a randomized algorithm that maintains a kingdom in $O(\log n)$ expected time following every edge-flip, and another algorithm that supports vertex insertions in $O(\sqrt{n})$ amortized time per insertion.
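For contrast with the dynamic setting above, the static versions of both tasks are folklore: any vertex of maximum out-degree is a king, and a witness path is easy to extract. A Python sketch of our own (`adj[u][v]` is True iff the edge is directed from $u$ to $v$):

```python
def find_king(adj):
    """In a tournament, any vertex of maximum out-degree is a king."""
    n = len(adj)
    return max(range(n), key=lambda u: sum(adj[u]))

def path_to(adj, king, v):
    """Return a path of length at most 2 from the king to v."""
    if king == v:
        return [king]
    if adj[king][v]:
        return [king, v]
    # v beats the king; some out-neighbour w of the king beats v, since
    # otherwise v would beat the king and all its out-neighbours and thus
    # have strictly larger out-degree, a contradiction.
    w = next(u for u in range(len(adj)) if adj[king][u] and adj[u][v])
    return [king, w, v]
```

The papers' contribution is maintaining such answers in $O(1)$ time under edge-flips and vertex updates, rather than recomputing from scratch.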
In this paper, we consider two-player impartial games with a pass-move. A disjunctive compound of games is a position in which, on each turn, the current player chooses one of the components and makes a legal move in it. For disjunctive compounds, it is known that by using Sprague-Grundy values (SG-values), the time to determine which player has a winning strategy can be bounded by the sum of times for calculating the SG-values of the components and the time to calculate the XOR of the SG-values. However, if we allow a pass-move during the play, the analysis of such games becomes much more difficult. A pass-move allows each player to skip exactly one turn during the game, but once the pass-move is used by a player, neither player may use a pass-move again. Moreover, once the position is terminal, no pass-move is allowed. We establish a homomorphism on the SG-values of games with a pass-move. That is, if every component satisfies a condition called one-move game, the SG-value of the disjunctive compound of the components with a pass-move is the same as the SG-value of Nim with a pass-move where the size of every pile is the same as the SG-value of every component of the compound. This guarantees that the time to determine which player has a winning strategy in a disjunctive compound with a pass can be bounded by the sum of the time to determine SG-values of all components and the time to determine the SG-value of a position in Nim with a pass-move. We also show how the homomorphism is used for determining SG-values of some Chocolate Games.
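The classical pass-free theory that the paper builds on is compactly expressible in code (our own sketch): compute SG-values by the mex recursion, then XOR them across components.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def sg(pos, moves):
    """Sprague-Grundy value: the minimum excludant (mex) of the SG-values
    of all positions reachable in one move."""
    reachable = {sg(q, moves) for q in moves(pos)}
    v = 0
    while v in reachable:
        v += 1
    return v

def first_player_wins(components, moves):
    """Disjunctive compound without a pass-move: the first player wins
    iff the XOR of the components' SG-values is nonzero."""
    x = 0
    for pos in components:
        x ^= sg(pos, moves)
    return x != 0

# Example: a Nim pile of size n has SG-value n.
def nim_moves(n):
    return range(n)

print(first_player_wins((3, 5, 6), nim_moves))  # 3 ^ 5 ^ 6 == 0 -> False
```

The paper's homomorphism extends exactly this reduction-to-Nim to compounds played with a single pass-move, provided every component is a one-move game.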
A conflict-free coloring of a hypergraph $H$ is an assignment of colors to the vertex set of $H$ such that every hyperedge in $H$ has a vertex whose color is distinct from every other vertex in that hyperedge. The minimum number of colors required for such a coloring is known as the conflict-free chromatic number of $H$. Conflict-free coloring has also been studied on the open/closed neighborhood hypergraphs of a given graph under the name open/closed neighborhood conflict-free coloring. In this paper, we study the list variant of conflict-free coloring where, for every vertex $v$, we are given a list of admissible colors $L_v$ such that $v$ is allowed to be colored only from $L_v$.
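Concretely, the base property can be checked as follows (a small sketch of ours; `colour` maps vertices to colours):

```python
def is_conflict_free(hyperedges, colour):
    """Every hyperedge must contain a vertex whose colour occurs exactly
    once within that hyperedge."""
    return all(
        any(sum(colour[u] == colour[v] for u in e) == 1 for v in e)
        for e in hyperedges
    )
```

In the list variant studied here, one additionally requires the chosen colour of each vertex $v$ to lie in its list $L_v$.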
It was shown by Pach and Tardos [Combinatorics, Probability and Computing, 2009] that for any constant $\epsilon > 0$, the closed-neighborhood conflict-free chromatic number of a graph $G$ is at most $O(\ln^{2 + \epsilon}\Delta)$, where $\Delta$ is the maximum degree of $G$. Later, Glebov, Szabó, and Tardos [Combinatorics, Probability and Computing, 2014] showed that there exist graphs $G$ that require $\Omega(\ln^2\Delta)$ colors for a closed-neighborhood conflict-free coloring. Bhyravarapu, Kalyanasundaram, and Mathew [Journal of Graph Theory, 2021] closed the gap between the upper and lower bounds by showing that the closed-neighborhood conflict-free chromatic number of any graph $G$ is at most $O(\ln^2 \Delta)$.
In this paper, we extend the $O(\ln^2 \Delta)$ upper bound to the list variant of the closed-neighborhood conflict-free chromatic number. Further, we establish several computational complexity results concerning the list open/closed-neighborhood conflict-free chromatic numbers.
A well-known consequence of Brooks' Theorem is that $3$-Colouring, a.k.a. $CSP(K_3)$, is solvable in polynomial time for subcubic graphs. We begin our investigation by observing that $QCSP(K_3)$, a.k.a. Quantified $3$-Colouring, is Pspace-complete on subcubic planar graphs. This allows us to give a complexity classification for bounded-alternation $\Sigma_i$-$QCSP(K_3)$ ($i\geq 3$, odd) on $\mathcal{H}$-topological-minor-free graphs. For all such (possibly infinite) sets $\mathcal{H}$, we delineate between those for which $\Sigma_i$-$QCSP(K_3)$ is solvable in polynomial time and those for which it is $\Sigma^P_{i-2}$-hard.
We continue our investigation of $QCSP(K_3)$ on $\mathcal{H}$-subgraph-free graphs, contrasting it with both $CSP(K_3)$ and the recently introduced C123-framework.
Next, we turn our attention to $P_5$- and $P_4$-free graphs. We prove that $QCSP(H)$ is in NP, for all finite graphs $H$, when the input is restricted to $P_5$-free graphs. For $QCSP(K_3)$, on $P_4$-free graphs, we are able to improve this to a polynomial time algorithm.
Finally, we propose a certain template, the line graph of a subdivided star $L(K^r_{1,4})$, such that $QCSP(L(K^r_{1,4}))$ behaves curiously on $P_m$-free graphs. When $m$ is small, this problem is solvable in polynomial time; for intermediate values of $m$, it is NP-complete; and for large enough $m$, it is Pspace-complete.
We examine ordered graphs, defined as graphs with linearly ordered vertices, from the perspective of homomorphisms (and colorings) and their complexities. We determine the computational and parameterized complexities of, and give algorithms for, the related problems, which turn out to span a range of complexities. We prove a reduction from homomorphisms of unordered structures to homomorphisms of ordered graphs, achieved with the use of ordered bipartite graphs. We then establish the NP-completeness of the problem of finding ordered homomorphisms of ordered graphs, as well as the XP membership and W[1]-hardness of this problem parameterized by the number of vertices of the image ordered graph. Classes of ordered graphs for which the problem can be solved in polynomial time are also presented.
Differential privacy is the gold standard for privacy preserving data analysis, which is crucial in a wide range of disciplines. Vertex colouring is one of the most fundamental graph problems. In this paper, we study the vertex colouring problem in the differentially private setting.
We consider edge-differential privacy. To satisfy non-trivial privacy guarantees under this notion of privacy, a colouring algorithm needs to be defective: a colouring is $d$-defective if a vertex can share a colour with at most $d$ of its neighbours. Without defectiveness, any edge-differentially private colouring algorithm needs to assign $n$ different colours to the $n$ different vertices. We show the following lower bound on the defectiveness: any $\epsilon$-edge differentially private algorithm that returns a $c$-vertex colouring of a graph of maximum degree $\Delta > 0$ with high probability must have defectiveness at least $d \in \Omega \left(\frac{\log n}{\epsilon + \log c}\right)$.
We complement our lower bound by presenting an $\epsilon$-differentially private algorithm for $O\left(\frac{\Delta}{\log n}+\frac{1}{\epsilon}\right)$-colouring a graph with defectiveness at most $O\left(\log n\right)$.
A hypergraph consists of a set of vertices and a set of subsets of vertices, called hyperedges. In the metro map metaphor, each hyperedge is represented by a path (the metro line) and the union of all these paths is the support graph (metro network) of the hypergraph. Formally speaking, a path-based support is a graph together with a set of paths. We consider the problem of constructing drawings of path-based supports that (i) minimize the sum of the number of bends on all paths, (ii) minimize the maximum number of bends on any path, or (iii) maximize the number of $0$-bend paths, then the number of $1$-bend paths, etc. We concentrate on straight-line drawings of path-based tree and cactus supports as well as orthogonal drawings of path-based plane supports with maximum degree $4$.
A plane Hamiltonian path on a set $S$ of distinct points in general position in the Euclidean plane is a crossing-free geometric path such that every point of $S$ is a vertex of the path. It is known that for a sufficiently large set in general position, there exist three edge-disjoint plane Hamiltonian paths. In this paper we study an edge-constrained version of the problem of finding Hamiltonian paths in a point set. We first consider the problem of finding a single plane Hamiltonian path $\pi$ with endpoints $s, t \in S$ and constraints given by a segment $ab$, where $a, b \in S$. We consider the following constrained scenarios: (i) segment $ab$ is contained in $\pi$; (ii) segment $ab$ is not contained in $\pi$. We characterize those quintuples $(S,a,b,s,t)$ for which $\pi$ can be found. Secondly, we consider the problem of finding two plane Hamiltonian paths $\pi_1$, $\pi_2$ on a set $S$ with constraints given by a segment $ab$, where $a, b \in S$. We consider the following constrained scenarios: (i) $\pi_1$ and $\pi_2$ share no edges and segment $ab$ is an edge of $\pi_1$; (ii) $\pi_1$ and $\pi_2$ share no edges and neither of them includes segment $ab$ as an edge; (iii) both $\pi_1$ and $\pi_2$ include segment $ab$ as an edge and share no other edges. In all scenarios, we characterize those triples $(S,a,b)$ for which $\pi_1$ and $\pi_2$ can be found.
Given two point sets $P$ and $R$ lying in the first quadrant $Q1$ and such that $(0,0) \in R$, the Rectilinear Steiner Forest Arborescence (RSFA) problem is to find the minimum-length spanning forest $F$ such that each connected component of $F$ is a rectilinear Steiner arborescence rooted at some root in $R$. The RSFA problem is a natural generalization of the Rectilinear Steiner Arborescence (RSA) problem, where $R=\{(0,0)\}$, and thus it is NP-hard (because of the NP-hardness of the RSA problem). Herein, we briefly discuss a polynomial time approximation scheme for the RSFA problem, present a fast $2$-approximation algorithm and provide a fixed-parameter algorithm.
In the solution discovery problem for a search problem on graphs, we are given an initial placement of $k$ tokens on the vertices of a graph and asked whether this placement can be transformed into a feasible solution by applying a small number of modifications. In this paper, we study the computational complexity of solution discovery for several fundamental vertex-subset problems on graphs, namely Vertex Cover Discovery, Independent Set Discovery, Dominating Set Discovery, and Feedback Vertex Set Discovery. We first present XP algorithms for all four problems parameterized by clique-width. We then prove that Vertex Cover Discovery, Independent Set Discovery, and Feedback Vertex Set Discovery are NP-complete for chordal graphs and graphs of diameter $2$, which have unbounded clique-width. In contrast to these hardness results, we show that all three problems can be solved in polynomial time on split graphs. Furthermore, we design an FPT algorithm for Feedback Vertex Set Discovery parameterized by the number of tokens.
Modern computer systems are fundamentally distributed; our everyday life relies on global communication systems such as the Internet and large-scale data centers that house thousands of interconnected computers. At the same time, quantum computing is advancing rapidly, boosted by tens of billions of euros in investment from governments and companies worldwide. But we still do not know if quantum computing and communication can truly help solve practically relevant tasks in the distributed setting!
In this talk, I will discuss three fundamentally different flavors of distributed quantum advantage. The first one is purely computational advantage: if one quantum computer can outperform one classical computer, a network of many quantum computers can outperform a network of many classical computers. But what if the bottleneck is related to communication, not computation?
The second flavor is advantage in communication bandwidth: sending one qubit can be more useful than sending one classical bit; such an advantage has been studied in the framework of the quantum-CONGEST model. However, sending one qubit can also be much more expensive than sending one classical bit, and hence it is unclear if this can ever lead to a practically relevant performance boost in computer networks.
The third flavor is advantage in communication rounds: a network of quantum computers can solve computational tasks in a smaller number of communication rounds than any classical system; a message with one qubit can be more powerful than a message with any number of classical bits. This kind of advantage has been studied in the framework of the quantum-LOCAL model, and if we could realize such an advantage for a practically relevant task, it would result in a convincing, indisputable distributed quantum advantage.
A priori, it is not at all clear that any savings in the number of communication rounds are possible, and there are numerous negative results that put limits on the amount of quantum advantage. But it turns out that there is hope! In a 2019 breakthrough result, Le Gall, Nishimura, and Rosmanis presented the first distributed problem with a strict quantum advantage in the quantum-LOCAL model, and our recent work has provided more examples of problems that admit an asymptotic distributed quantum advantage.
Yet all these examples of quantum advantage have been purely artificial problems, engineered to demonstrate quantum advantage, but serving no practical purpose. Is there any hope of realizing quantum advantage in practice? I will explore the current frontier of what is known and what is unknown, and what are the most promising avenues for overcoming the current barriers.
We study the Requirement Cut problem, a generalization of numerous classical graph cut problems including, among others, Multicut, Multiway Cut, $k$-Cut, and Steiner Multicut. Given a graph with edge costs, terminal groups $S_1, \ldots, S_g$, and integer requirements $r_1, \ldots, r_g$, the goal is to compute a minimum-cost edge cut that separates each group $S_i$ into at least $r_i$ connected components. Despite many efforts, the best known approximation for Requirement Cut yields an $O(\log g \cdot \log n)$ approximation ratio, as it relies on embedding general graphs into trees and solving the tree instance.
In this paper, we explore two largely unstudied structural parameters in order to obtain single-logarithmic approximation ratios: (1) the number of minimal Steiner trees in the instance, which is upper-bounded by the number of spanning trees of the graph multiplied by $g$, and (2) the $\operatorname{depth}$ of series-parallel graphs. Specifically, we show that if the number of minimal Steiner trees is polynomial in $n$, then a simple LP-rounding algorithm yields an $O(\log n)$-approximation, and that on series-parallel graphs of bounded depth, a refined analysis of a known probabilistic embedding yields an $O(\operatorname{depth} \cdot \log g)$-approximation. Both results extend the known class of graphs that admit a single-logarithmic approximation ratio.
Understanding how a vertex relates to a set of other vertices is a fundamental task in graph analysis. Given a graph $G$ and a vertex set $X \subseteq V(G)$, consider the collection of subsets of the form $N(u) \cap X$ where $u$ ranges over all vertices outside of $X$. These intersections, which we call the traces of $X$, capture all ways in which vertices in $G$ connect to $X$, and in this paper we consider the problem of listing these traces efficiently, as well as the related problem of additionally recording the multiplicity (frequency) of each trace.
For a given query set $X$, both problems have obvious algorithms that scale with $O(|N(X)| \cdot |X|)$, and conditional lower bounds suggest that on general graphs one cannot expect better. However, in certain sparse graph classes better algorithms are possible: Drange et al. (IPEC 2023) used a data structure that answers trace queries in $d$-degenerate graphs with linear initialisation time and a query time that depends only on the query set $X$ and $d$. However, the query time is exponential in $|X|$, making this approach impractical.
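For reference, the obvious algorithm is a few lines (our own sketch; `adj` maps every vertex to its neighbour set, and all vertices with no neighbour in $X$ share the empty trace):

```python
from collections import Counter

def trace_frequencies(adj, X):
    """Naive O(|N(X)| * |X|) trace-frequency computation: collect
    N(u) ∩ X for every vertex u outside X with a neighbour in X."""
    X = frozenset(X)
    boundary = {u for v in X for u in adj[v]} - X
    freq = Counter(frozenset(adj[u] & X) for u in boundary)
    empty = len(adj) - len(X) - len(boundary)  # vertices with empty trace
    if empty:
        freq[frozenset()] = empty
    return freq
```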
By using a stronger parameter than degeneracy, namely the strong $2$-colouring number $s_2$, we construct a data structure in $O(d \cdot \|G\|)$ time which answers subsequent trace frequency queries in time $O\big((d^2 + s_2^{d+2})|X|\big)$, where $\|G\|$ is the number of edges of $G$, $s_2$ the strong $2$-colouring number and $d$ the degeneracy of a suitable ordering of $G$.
We demonstrate that this data structure is indeed practical and that it beats the simple obvious alternative in almost all tested settings on a collection of 214 real-world networks with up to 1.1M edges. As part of this effort, we demonstrate that computing an ordering with a small strong $2$-colouring number is possible with a very simple heuristic.
Subgraph counting is a fundamental algorithmic problem with many applications, including in the analysis of social and biological networks. The problem asks for the number of occurrences of a pattern graph $H$ as a subgraph of a host graph $G$ and is known to be computationally challenging: it is $\#W[1]$-hard even when $H$ is restricted to simple structures such as cliques or paths. Curticapean and Marx (FOCS'14) show that if the pattern graph $H$ has vertex cover number $\tau$, subgraph counting has time complexity $O(|H|^{2^{O(\tau)}} |G|^{\tau + O(1)})$.
This raises the question of whether this upper bound can be improved for input graphs $G$ from a restricted family of graphs. Earlier work by Eppstein (IPL'94) shows that this is indeed possible, by proving that when $G$ is a $d$-degenerate graph and $H$ is a biclique of arbitrary size, subgraph counting has time complexity $O(d 3^{d/3} |G|)$.
We show that if the input is restricted to $d$-degenerate graphs, the upper bound of Curticapean and Marx can be improved for a family of pattern graphs $H$ that includes all bicliques and satisfies a property we call $(c,d)$-locatability. Importantly, our algorithm's running time depends only polynomially on the size of $H$.
A key feature of $(c,d)$-locatable graphs $H$ is that they admit a vertex cover of size at most $cd$. We further characterize $(1,d)$-locatable graphs, for which our algorithms achieve a running time linear in $|G|$, and we establish a lower bound showing that counting pattern graphs that are just barely not $(1,d)$-locatable is already $\#\mathsf{W}[1]$-hard.
We note that the restriction to $d$-degenerate graphs has been a fruitful line of research, leading to two very general results (FOCS'21, SODA'25) and creating the impression that the complexity of counting substructures in degenerate graphs is largely understood. However, all aforementioned results have an exponential dependency on the size of the pattern graph $H$.
We consider problems of finding a maximum size/weight $t$-matching without forbidden subgraphs in an undirected graph $G$ with maximum degree bounded by $t+1$, where $t$ is an integer greater than $2$. A $t$-matching is a subset of edges such that each vertex is incident to at most $t$ edges of the subset. Depending on the variant, the forbidden subgraphs are certain subsets of the $t$-regular complete partite subgraphs of $G$. A graph is complete partite if its vertex set admits a partition such that every pair of vertices from different parts is connected by an edge and the vertices of each part form an independent set; the clique $K_t$ and the bipartite clique $K_{t,t}$ are examples of complete partite graphs. These problems are natural generalizations of the triangle-free and square-free $2$-matching problems in subcubic graphs. In the weighted setting we assume that the edge weights of $G$ are vertex-induced on every forbidden subgraph. We present simple and fast combinatorial algorithms for these problems; they are the first algorithms for the weighted versions, and for the unweighted versions they are faster than those previously known. Additionally, the running times of our algorithms are independent of $t$. Our approach relies on gadgets built from so-called half-edges, where a half-edge of an edge $e$ is, informally speaking, a half of $e$ containing exactly one of its endpoints.
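As a minimal illustration of the definition, the degree condition of a candidate $t$-matching can be verified directly; the function name and input format below are ours, and the forbidden-subgraph constraints from the paper are deliberately not checked.

```python
from collections import Counter

def is_t_matching(edges, t):
    """Check the defining property of a t-matching: every vertex is incident
    to at most t edges of the chosen edge subset.

    edges: iterable of pairs (u, v); t: the degree bound.  Illustrative only.
    """
    inc = Counter()
    for u, v in edges:
        inc[u] += 1
        inc[v] += 1
    return all(c <= t for c in inc.values())
```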
Spanning trees with various desirable properties have been studied extensively in algorithm theory. In this paper, we consider the Minimum Cover Spanning Tree problem (MCST for short): given a graph $G$ and a positive integer $k$, decide whether $G$ has a spanning tree with a vertex cover of size at most $k$. We reveal an equivalence between MCST and the Dominating Set problem when $G$ has diameter at most $2$ or is $P_5$-free. This yields intractability for these graph classes and tractability for several subclasses of $P_5$-free graphs. We also show that MCST is $\mathsf{NP}$-complete for bipartite planar graphs of maximum degree $4$ and for unit disk graphs; these hardness results resolve open questions posed in prior research. Finally, we present an FPT algorithm for MCST parameterized by clique-width and a linear-time algorithm for interval graphs.
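To illustrate the MCST certificate, here is a brute-force sketch (our own, purely for illustration) that checks whether a given spanning tree admits a vertex cover of size at most $k$; the paper's algorithms are of course far more efficient than this exponential check.

```python
from itertools import combinations

def tree_has_small_cover(tree_edges, vertices, k):
    """Brute-force MCST certificate check: does this spanning tree have a
    vertex cover of size at most k?  Exponential in |vertices|; only meant
    to make the problem definition concrete.
    """
    for r in range(k + 1):
        for cover in combinations(vertices, r):
            c = set(cover)
            if all(u in c or v in c for u, v in tree_edges):
                return True
    return False
```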
We study the problem of estimating the sum of the weights of the $n$ elements of a universe $U$, $|U| = n$, where each element $i$ carries a weight $w(i) > 0$. Specifically, our goal is to estimate the sum $W = \sum_{i=1}^n w(i)$ within a factor of $(1 \pm \epsilon)$ using a sublinear number of samples, under the assumption that the weights are non-increasing, i.e., $w(1) \geq w(2) \geq \ldots \geq w(n)$. The sum estimation problem is well studied under various access models to the universe $U$; however, to the best of our knowledge, nothing is known about sum estimation using non-adaptive conditional sampling. To address this gap, we explore the sum estimation problem using non-adaptive conditional weighted samples and non-adaptive conditional uniform samples, assuming that the underlying distribution ($D(i) = w(i)/W$) is monotone. Furthermore, we extend our approach to the case where the underlying distribution of $U$ is unimodal. We also address the problem of approximating the support size of $U$ when $w(i) = 0$ or $w(i) \geq W/n$ using hybrid sampling, where both weighted and uniform sampling are available to access $U$.
We provide an algorithm for estimating the sum of $n$ elements whose weights are non-increasing. Our algorithm requires $O(\frac{1}{\epsilon^3}\log{n}+\frac{1}{\epsilon^6})$ non-adaptive weighted conditional samples and $O(\frac{1}{\epsilon^3}\log{n})$ non-adaptive uniform conditional samples, in line with the $\Omega(\log{n})$ lower bound of [ACK15]. We also extend our algorithm to the case where the underlying distribution $D$ is unimodal; the sample complexity is the same as in the monotone case, with an additional $O(\log{n})$ adaptive evaluation queries to find the minimum-weight point in the domain $[n]$. Additionally, we investigate the problem of estimating the support size of $U$, consisting of $n$ elements whose weights are either $0$ or at least $W/n$. Our algorithm uses $O\big( \frac{\log^3{(n/\epsilon)}}{\epsilon^8}\cdot \log^4{\frac{\log{(n/\epsilon)}}{\epsilon}}\big)$ uniform samples and $O\big( \frac{\log{(n/\epsilon)}}{\epsilon^2}\cdot \log{\frac{\log{(n/\epsilon)}}{\epsilon}}\big)$ weighted samples from the universe and outputs an estimate $\hat{k}$ of the support size $k$ satisfying $k-2\epsilon n\leq\hat{k}\leq k+\epsilon n$.
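The following sketch is not the paper's conditional-sampling algorithm; it merely illustrates why monotonicity makes sublinear sum estimation plausible: with $O(\log n)$ point evaluations at geometrically spaced indices, the sum of a non-increasing sequence can already be sandwiched within a small constant factor.

```python
def estimate_monotone_sum(w, n):
    """Illustration only: for non-increasing weights, O(log n) evaluations
    give lower <= W <= upper with upper <= 3 * W.

    w: callable returning w(i) for 1 <= i <= n.  Bucket j covers indices
    [2^j, 2^(j+1)); monotonicity sandwiches its contribution between
    (bucket size) * w(last index) and (bucket size) * w(2^j).
    """
    upper = lower = 0.0
    j = 0
    while 2 ** j <= n:
        size = min(2 ** j, n - 2 ** j + 1)   # last bucket may be truncated
        upper += size * w(2 ** j)            # largest weight in the bucket
        nxt = min(2 ** (j + 1) - 1, n)
        lower += size * w(nxt)               # smallest weight in the bucket
        j += 1
    return lower, upper
```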
We consider two basic communication tasks in arbitrary radio networks: $k$-broadcasting and $k$-gathering. In $k$-broadcasting, messages from $k$ sources have to reach all nodes in the network; the goal of $k$-gathering is to collect messages from $k$ source nodes at a designated sink node. We consider these problems in the framework of distributed algorithms with advice. Krisko and Miller showed in 2021 that the optimal size of advice for $k$-broadcasting is $\Theta(\min(\log \Delta, \log k))$, where $\Delta$ is the maximum degree of the input communication graph. We show that the same bound $\Theta(\min(\log \Delta, \log k))$ on the optimal size of advice also holds for the $k$-gathering problem. Moreover, we design fast algorithms for both problems with asymptotically optimal size of advice. For $k$-gathering, our algorithm works in at most $D + k$ rounds, where $D$ is the diameter of the communication graph; this time bound is optimal even for centralized algorithms. We then apply the $k$-gathering algorithm to $k$-broadcasting, obtaining an algorithm that works in $O(D + \log_2 n + k)$ rounds. We also exhibit a logarithmic gap in time complexity between distributed algorithms with advice of optimal size and distributed algorithms with distinct arbitrary labels.
We consider the problem of online buffer minimization in multiprocessor systems with conflicts (in short, the buffer minimization problem) in the recently introduced flow model. In an online fashion, workloads arrive at some of the $n$ processors and are stored in an input buffer. Processors can run and thereby reduce these workloads, but conflicts between pairs of processors restrict simultaneous execution. Conflicts are represented by a graph whose vertices correspond to processors and whose edges indicate conflicting pairs. An online algorithm must decide which processors run at each point in time, thus providing a valid schedule that respects the conflict constraints.
The objective is to minimize the maximal workload observed across all processors during the schedule. Unlike the original model, where workloads arrive as discrete blocks at specific points in time, the flow model assumes that workloads arrive continuously over intervals or not at all. We present tight bounds for all graphs with four vertices (except the path, which has been solved previously) as well as for the families of complete graphs and complete bipartite graphs. We also recover almost tight bounds for complete $k$-partite graphs.
For the original model, we narrow the gap for the graph consisting of a triangle and an additional edge to a fourth vertex.
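To make the scheduling constraint concrete, here is a discretized sketch of one step of the flow model; the function and its input format are our own simplification, not the paper's continuous-time formulation.

```python
def simulate_step(buffers, arrivals, run, conflicts, rate=1.0):
    """One discrete step of a simplified buffer-minimization model: the set
    `run` must be independent in the conflict graph; run processors drain
    workload at unit rate while arrivals add to the input buffers.

    buffers/arrivals: dict processor -> workload; conflicts: set of pairs.
    """
    assert all((u, v) not in conflicts and (v, u) not in conflicts
               for u in run for v in run if u != v), "conflicting processors"
    new = {}
    for p, b in buffers.items():
        b += arrivals.get(p, 0.0)          # incoming workload this step
        if p in run:
            b = max(0.0, b - rate)         # processor reduces its buffer
        new[p] = b
    return new
```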
In Duration-aware Pinwheel Scheduling (DAPS), each job $i$ has a duration $p_i$ (in days), and the objective is to find a schedule on a single machine such that each job $i$ is completed at least once in every $f_i$ consecutive days. This problem generalizes the well-known (Ordinary) Pinwheel Scheduling problem, in which all jobs have unit length (i.e., $p_i = 1$). Previous studies have shown that in order to schedule jobs $1$ through $n$ feasibly, it is necessary that the density of the instance, defined as $\sum_{i=1}^{n} p_i / f_i$, is at most $1$. Additionally, in Ordinary Pinwheel Scheduling, every instance with density at most $5/6$ is known to be schedulable. However, it has remained open whether a similar density threshold leads to an efficient algorithm for deciding schedulability in DAPS. In this paper, we answer this question in the negative: no constant density threshold yields a polynomial-time algorithm for deciding schedulability in DAPS unless $\mathsf{P} = \mathsf{NP}$. Furthermore, we demonstrate that another heuristic from Ordinary Pinwheel Scheduling fails to extend to DAPS. We then present an online scheduler that always schedules jobs feasibly for a certain class of DAPS instances, based on a new measure of the schedulability of instances. This scheduler can be regarded as a $3$-approximation algorithm for the optimization version of DAPS, known as Continuous Bamboo Garden Trimming on star graphs, improving upon the previous best approximation ratio of $3 + 2\sqrt{2}$.
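As a small worked example of the density condition, the necessary condition $\sum_i p_i/f_i \le 1$ can be checked directly; the code and names below are ours, for illustration only.

```python
def daps_density(jobs):
    """Density of a DAPS instance given as (p_i, f_i) pairs: sum of p_i / f_i.
    Density <= 1 is necessary for schedulability, but as the paper shows, no
    constant density threshold yields a poly-time decision procedure
    (unless P = NP)."""
    return sum(p / f for p, f in jobs)

# e.g. jobs [(1, 2), (2, 6)] have density 1/2 + 1/3 = 5/6
```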
For a sequence of tasks, each with a positive integer period, the pinwheel scheduling problem asks for a valid schedule, namely one that performs one task per day and performs each task at least once in every window of consecutive days whose length equals its period. Chan and Chin conjectured in 1993 that there exists a valid schedule for any sequence of tasks whose density, the sum of the reciprocals of the periods, is at most $\frac{5}{6}$. Recently, Kawamura settled this conjecture affirmatively. In this paper we consider an extended version with real periods proposed by Kawamura, in which a valid schedule must perform each task $i$ having a real period $a_{i}$ at least $l$ times in any $\lceil l a_{i} \rceil$ consecutive days, for every positive integer $l$. We show that any sequence of tasks whose periods take three distinct real values and whose density is at most $\frac{5}{6}$ admits a valid schedule. We conjecture that the bound of Chan and Chin holds also for real periods.
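To clarify the real-period condition, the following sketch (our own illustrative checker, not from the paper) verifies it on a cyclic schedule for all window lengths that fit within one cycle; it is not a proof of validity for arbitrarily large $l$.

```python
import math

def is_valid_real_period_schedule(schedule, periods):
    """Check Kawamura's real-period condition on a cyclic schedule: task i
    with period a_i must occur at least l times in every window of
    ceil(l * a_i) consecutive days, for every positive integer l.

    schedule: list of task ids, interpreted cyclically;
    periods: dict task -> real period a_i.  Only windows of length at most
    one cycle are checked (illustrative, and intentionally unoptimized).
    """
    n = len(schedule)
    for task, a in periods.items():
        l = 1
        while (win := math.ceil(l * a)) <= n:
            for start in range(n):       # every cyclic window of length win
                count = sum(1 for d in range(win)
                            if schedule[(start + d) % n] == task)
                if count < l:
                    return False
            l += 1
    return True
```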