Winter Term 2019/20

March 11, 2020, 2:00 pm, F1.110

Max-Line formation with highly limited robots in the continuous time model

As small mobile robots become inexpensive, there is a growing tendency to replace large, powerful and expensive systems by many lightweight, simple and cheap robots. Among other possible tasks, such swarms can be used to enable communication by providing fields or chains of mobile relays. Because of the desired simplicity of such robots, the lack of a guaranteed initial placement and the variety of setting definitions, the problem of building communication formations still has many open questions and is therefore an interesting research field.

This Master's thesis focuses on the problem of forming a line of robots of maximum length (the Max-Line formation problem) and thereby connects to the current research of the Algorithms and Complexity group of the Heinz Nixdorf Institute, which considers the Max-Chain formation problem. The presentation will introduce the Max-Line formation problem and show interesting observations and approaches based on a mix of gathering and chain ideas.

(Introductory talk, Master's thesis)

February 26, 2020, 2:00 pm, F1.110

Local Gathering of Mobile Robots in Three Dimensions

As technology advances, large groups of small-scale robots may be used to perform a variety of tasks. A particular problem that has been studied is the gathering problem, in which the robots have to meet at some non-specified point from an arbitrary connected starting configuration. However, they are typically assumed to have only very limited capabilities: They only have a limited viewing range, have no memory of the past, cannot be distinguished from one another, are not allowed to communicate and do not even share a common coordinate system.

Several results are already known for this setting. However, they have so far only been shown for robots that act in the two-dimensional Euclidean plane. In this talk, we will instead consider this problem in three-dimensional Euclidean space. In particular, two different time models will be considered: For a discrete time model, an extension of the analysis of the Go-to-the-Center algorithm to three dimensions will be presented. Furthermore, we will discuss a continuous time model. In this model, a runtime bound for the class of contracting algorithms will be presented along with an experimental evaluation of three potential concrete algorithms.

February 26, 2020, 9:45 am, F1.110

Algorithm for selecting a set of viewpoints from which a maximum number of the exterior surfaces of an object are visible

Scenes in computer graphics are becoming more and more complex, which can lead to scenes that cannot be rendered in real time. One way to solve this problem is to use point-based rendering methods like Blue Surfels [1], which render a dotted surface instead of a triangle mesh surface. The geometric simplicity of these dotted surfaces allows the rendering of highly complex scenes in real time. In the case of Blue Surfels [1], a set of rendered images of an object can be used to create the dotted surface. The selection of this set of images is a non-trivial task and the topic of this talk.

The first part of this talk describes how the rendering pipeline of OpenGL can be used to determine which exterior surfaces of an object are visible from a viewpoint. From the obtained data, a quality measure for a viewpoint is calculated in order to compare different viewpoints.

In the second part of this presentation, various algorithms are presented that compute a set of candidate viewpoints for an object. A greedy algorithm then chooses the best viewpoints from this set based on the previously defined quality measure. The goal of this selection is to determine a small set of viewpoints from which a maximum number of the exterior surfaces of an object are visible.
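The greedy selection described above follows the classical set-cover heuristic: repeatedly pick the viewpoint that reveals the most not-yet-covered surfaces. A minimal sketch (function name, quality measure and the toy visibility data are illustrative, not the thesis' actual implementation):

```python
def select_viewpoints(visibility, max_viewpoints):
    """Greedily pick viewpoints until no new surface is covered
    or the budget is exhausted.

    visibility: dict mapping viewpoint id -> set of visible surface ids.
    """
    covered = set()
    chosen = []
    for _ in range(max_viewpoints):
        # Quality of a viewpoint here: number of not-yet-covered surfaces it sees.
        best = max(visibility, key=lambda v: len(visibility[v] - covered))
        gain = visibility[best] - covered
        if not gain:
            break
        chosen.append(best)
        covered |= gain
    return chosen, covered

views = {
    "front": {1, 2, 3},
    "back": {4, 5},
    "top": {2, 3, 4},
}
chosen, covered = select_viewpoints(views, max_viewpoints=2)
```

With this toy data the heuristic first picks "front" (three new surfaces), then "back" (two more), covering all five surfaces with two viewpoints.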

[1] Sascha Brandt, Claudius Jähn, Matthias Fischer, Friedhelm Meyer auf der Heide. Visibility-Aware Progressive Farthest Point Sampling on the GPU. 2019.

(Final Presentation Bachelor's Thesis, talk in German)

January 29, 2020, 2:00 pm, F1.110

Mincut sensitivity data structures for insertion of an edge

Let $G=(V,E)$ be an undirected graph on n vertices with nonnegative capacities on its edges. We address the problem of building compact data structures that can efficiently report the set of pairs of vertices whose mincut value gets increased upon insertion of any new edge or increasing the capacity of any existing edge. We present the following results for the single source and all-pairs version of this problem.

SINGLE SOURCE:
Given any designated source vertex s, there exists an O(n)-size data structure that can output the set of all vertices whose mincut to s increases upon insertion of any edge. The query time is O(n). Note that the size of the data structure is optimal, and the query time is optimal when the size k of the output set is \Theta(n). We also show that for directed graphs, any data structure for single-source mincut sensitivity needs \Omega(n^2) bits in the worst case, irrespective of the query time it can guarantee.

ALL-PAIRS:
There exists an O(n^2)-size data structure that can output the set of all pairs of vertices whose mincut value increases upon insertion of any edge. The query time is O(k), where k is the size of the output set. Note that the query time is optimal, and the size of the data structure improves on the trivial O(n^3)-size data structure for this problem by a factor of n.
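To make the sensitivity notion concrete, here is the brute-force baseline the data structures above improve upon: recompute all pairwise mincut values (via max flow) before and after the capacity increase and report the pairs whose value grew. This is only an illustrative reference, far from the O(n^2)-size structure of the talk; all names are made up:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a capacity matrix; undirected edges
    are modelled as capacity in both directions."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual network.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total
        # Find the bottleneck along the path and push flow.
        v, push = t, float("inf")
        while v != s:
            u = parent[v]
            push = min(push, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += push
            flow[v][u] -= push
            v = u
        total += push

def affected_pairs(cap, u, v, delta):
    """Brute-force sensitivity oracle: pairs whose mincut value
    increases when edge (u, v) gains capacity delta."""
    n = len(cap)
    before = {(a, b): max_flow(cap, a, b)
              for a in range(n) for b in range(a + 1, n)}
    cap2 = [row[:] for row in cap]
    cap2[u][v] += delta
    cap2[v][u] += delta
    return [(a, b) for (a, b) in before
            if max_flow(cap2, a, b) > before[(a, b)]]

# Path 0-1-2-3 with capacities 3, 1, 3; raise the bottleneck edge (1, 2).
cap = [[0, 3, 0, 0], [3, 0, 1, 0], [0, 1, 0, 3], [0, 0, 3, 0]]
affected = affected_pairs(cap, 1, 2, 1)
```

On this path, exactly the pairs separated by the bottleneck edge are affected: (0,2), (0,3), (1,2) and (1,3), while (0,1) and (2,3) keep their mincut value of 3.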

In order to derive our results, we use interesting insights into closest and farthest mincuts for a pair of vertices. In addition, a crucial result that we derive and use is that there exists a directed acyclic graph on O(n) vertices and O(n) edges that compactly stores the farthest mincuts from all vertices to a designated vertex s in the graph. This result complements the result of Hariharan et al. [STOC 2009], who showed that the closest mincuts from all vertices to s form a laminar family and can hence be stored in only O(n) space.

We believe that this graph theoretic result is of independent interest.
NOTE: This research work has been carried out in collaboration with Till Knollmann and Shiv Gupta.

January 8, 2020, 2:00 pm, F1.110

Approximating Weighted Completion Time for Order Scheduling with Setup Times

Consider a scheduling problem in which jobs need to be processed on a single machine. Each job has a weight and is composed of several operations belonging to different families. The machine needs to perform a setup between the processing of operations of different families. A job is completed when its latest operation completes, and the goal is to minimize the total weighted completion time of all jobs. We study this problem from the perspective of approximability and provide constant factor approximations as well as an inapproximability result. Prior to this work, only the NP-hardness of the unweighted case and the polynomial solvability of a certain special case were known.
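The objective can be made precise by evaluating a fixed operation sequence: setups are paid whenever the family changes, and each job completes with its last operation. A small sketch of this cost function (names and the toy schedule are illustrative):

```python
def weighted_completion_time(schedule, weights, setup):
    """Evaluate a single-machine schedule.

    schedule: list of (job, family, processing_time) operations in order.
    weights: dict mapping job -> weight.
    setup: setup time paid whenever the family changes between
           consecutive operations (and before the very first one).
    """
    t = 0
    prev_family = None
    completion = {}
    for job, family, p in schedule:
        if family != prev_family:
            t += setup          # machine switches to a new family
            prev_family = family
        t += p
        completion[job] = t     # a job completes with its latest operation
    return sum(weights[j] * completion[j] for j in completion)

# Two jobs with operations from families "A" and "B", setup time 2.
sched = [("j1", "A", 3), ("j2", "A", 1), ("j1", "B", 2)]
cost = weighted_completion_time(sched, {"j1": 1, "j2": 2}, setup=2)
```

Here j2 completes at time 6 and j1 at time 10 (after a second setup for family "B"), giving a total weighted completion time of 1·10 + 2·6 = 22. Note how batching operations of the same family saves setups but can delay other jobs, which is the core tension of the problem.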

December 18, 2019, 2:00 pm, F1.110

Computer Science of Numerics: Continuous Abstract Data Types for a Computer ANALYSIS System

Since the introduction of the IEEE 754 floating point standard in 1985, numerical methods have become ubiquitous and increasingly sophisticated. With the growing code complexity of numerical libraries grows the need for rigorous software engineering: as provided by computer science and state of the art for the digital processing of discrete data, but lacking in the continuous realm, resulting in numerical disasters like Sleipner-A or Ariane V-501.

We apply, adapt, and extend the classical concepts (specification, algorithmics, analysis, complexity, verification) from discrete data (bit strings, integers, graphs, etc.) to continuous data: real numbers/tensors/polynomials, converging sequences, analytic/smooth/integrable functions, bounded operators, and compact subsets.

A new paradigm reconciles the BSS model (aka real-RAM) with Computable Analysis: real numbers are provided exactly, without rounding errors, but testing for inequality becomes semi-decidable, that is, a partial or non-extensional "soft test".
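The "soft test" idea can be illustrated with exact rational approximations: a comparison of two reals succeeds once some finite precision witnesses a strict inequality, but equality can never be confirmed, only left undecided. A toy sketch in this spirit (the encoding of reals as approximation oracles and all names are my own illustration, not the system presented in the talk):

```python
from fractions import Fraction

def soft_less(x, y, k):
    """Semi-decidable comparison: x and y are oracles where x(n) is a
    rational approximation of a real number with error at most 2**-n.
    Returns True once some precision up to k witnesses x < y, False for
    y < x; equality can never be certified at finite precision."""
    for n in range(k + 1):
        eps = Fraction(1, 2 ** n)
        if x(n) + eps < y(n) - eps:   # x is certainly below y
            return True
        if y(n) + eps < x(n) - eps:   # y is certainly below x
            return False
    raise ValueError("undecided up to precision 2**-{}".format(k))

one = lambda n: Fraction(1)             # the exact real 1
three_halves = lambda n: Fraction(3, 2) # the exact real 1.5
result = soft_less(one, three_halves, 10)
```

The test is "partial" exactly as described above: `soft_less(one, one, k)` raises for every k, since no finite precision can separate equal reals.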

We prove this approach Turing-complete over the reals and extend it to higher mathematical structures as continuous Abstract Data Types.

Following the last decades' seminal interplay between Discrete Mathematics and Computer Science, Future Numerics revolves around Computer Science bridging between Pure and Applied continuous Mathematics.

December 13, 2019, 9:00 am, F0.225

Inapproximability Results for Scheduling with Interval and Resource Restrictions

In the restricted assignment problem, the input consists of a set of machines and a set of jobs, each with a processing time and a subset of eligible machines. The goal is to find an assignment of the jobs to the machines minimizing the makespan, that is, the maximum total processing time any machine receives. Herein, jobs may only be assigned to machines on which they are eligible. It is well-known that there is no polynomial time approximation algorithm with an approximation guarantee of less than 1.5 for the restricted assignment problem unless P=NP. In this work, we show hardness results for variants of the restricted assignment problem with particular types of restrictions.
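The objective just defined can be pinned down with a brute-force reference implementation: enumerate every eligible assignment and take the smallest makespan. This is of course exponential and only serves to make the problem statement concrete on toy instances; it is not one of the approximation algorithms discussed in the talk:

```python
from itertools import product

def optimal_makespan(jobs, machines):
    """Exhaustive search over all eligible assignments.

    jobs: list of (processing_time, set_of_eligible_machines).
    machines: list of machine ids.
    """
    best = float("inf")
    # One choice of eligible machine per job.
    for assignment in product(*[sorted(eligible) for _, eligible in jobs]):
        load = {m: 0 for m in machines}
        for (p, _), m in zip(jobs, assignment):
            load[m] += p
        best = min(best, max(load.values()))  # makespan of this assignment
    return best

# Job 0 is restricted to machine 0; jobs 1 and 2 are eligible everywhere.
jobs = [(3, {0}), (2, {0, 1}), (2, {0, 1})]
best = optimal_makespan(jobs, machines=[0, 1])
```

In this instance the restricted job forces load 3 onto machine 0, and the optimum places the two flexible jobs on machine 1 for a makespan of 4.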

In the case of interval restrictions the machines can be totally ordered such that jobs are eligible on consecutive machines. We resolve the open question of whether the problem admits a polynomial time approximation scheme (PTAS) in the negative (unless P=NP). There are several special cases of this problem known to admit a PTAS.

Furthermore, we consider a variant with resource restrictions where each machine has a capacity for each of a fixed number of resources and each job has a corresponding demand for each resource. A job is eligible on a machine if, for each resource, its demand is at most the capacity of the machine. For one resource, this problem is known to admit a PTAS; for two, it contains the case of interval restrictions; and in general, the problem is closely related to unrelated scheduling with a low-rank processing time matrix. We show that there is no polynomial time approximation algorithm with a rate smaller than 48/47 or 1.5 for scheduling with resource restrictions with 2 or 4 resources, respectively, unless P=NP. All our results can be extended to the so-called Santa Claus variants of the problems, where the goal is to maximize the minimal processing time any machine receives.

December 4, 2019, 2:00 pm, F1.110

Analysis of the Online Facility Location Problem with Mobile Facilities

In the original facility location problem, a set of requests and a set of possible facility locations in a metric space are given. The task is to select a subset of the facility locations and to connect each request to a facility from the chosen set, where costs arise for serving the requests (the distance to the connected facility) and for opening the facilities. The goal is to minimize the total cost.

In contrast to the original facility location problem, this talk presents algorithms for two online variants, in which the requests arrive one after another and it is also possible to change the original position of a facility. In the first variant, moving a facility incurs no cost, but the distance by which a facility may be moved is bounded. In the second variant, the maximum movement distance is no longer bounded, but every movement of a facility incurs a constant cost. Both variants assume the somewhat more specific Euclidean space and uniform opening costs for the facilities. Moreover, upon arrival, each request must be irrevocably assigned to a facility, incurring the corresponding service cost.

November 22, 2019, 5:00 pm, F0.231

Annual Cooperation Colloquium

The two speakers of this year's colloquium are alumni of our research group. They report on their current fields of activity in industrial and scientific practice.

Jaroslaw Kutylowski is co-founder and CEO of DeepL GmbH in Cologne. He led the launch of the DeepL Pro service. DeepL is currently the best service in the field of automatic language translation. It is aimed at professional users and has established itself as an important player in the AI sector. Jaroslaw Kutylowski received his doctorate in our research group in 2007. His dissertation founded our research focus on the algorithmic foundations of robot swarms.

Martin Ziegler is a professor of theoretical computer science at the Korea Advanced Institute of Science and Technology (KAIST) in South Korea, preceded by research stays in Denmark, Japan and Austria. His research revolves around the algorithmic foundations of numerics: from computability via complexity and logic to computational physics. Martin Ziegler received his doctorate in our research group in 2004 with a thesis on the computability of real geometric problems. He completed his habilitation at Paderborn University in 2008.

New date: November 20, 2019, 2:00 pm, F1.110

Mobile Resource Allocation

We consider applications involving many users who upload data over the internet. In order to minimize response times, we seek to place instances of the corresponding service close to the users. However, each instance of the service comes at a cost, and relocating an instance induces a cost proportional to the distance it is moved. In addition, we only allow services to be migrated over a limited distance.

In the first part of the talk, I will briefly summarize the findings of my PhD project concerned with the above problem. The theoretical models are close to classical problems such as the k-Server and Online Facility Location problems. In the second part, I will discuss potential future research based on the work done so far as well as other interesting approaches in the broad area of resource allocation.

October 30, 2019, 2:00 pm, F1.110

A Matter of Taste – Dealing with requests for different types

We present current results on the Online Multi-Commodity Facility Location Problem (OMFLP) in metric spaces. In comparison to the Online Facility Location Problem (OFLP), the OMFLP considers the case that each request has a type from a finite type set S. An arriving request has to be assigned immediately to a facility serving its type, incurring an assignment cost; construction costs for facilities depend on the set of types served and on the position where a facility is opened. The task is to minimize the total assignment and construction cost.
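The objective just described can be sketched as a simple cost function over a solution: construction costs summed over opened facilities plus the metric distance of each request to its assigned facility, which must serve the request's type. An illustrative offline sketch (all names and the toy instance are made up, not the talk's model code):

```python
def omflp_cost(facilities, assignments, construction_cost, dist):
    """Total cost of a multi-commodity facility location solution.

    facilities: list of (position, frozenset of served types).
    assignments: list of (request_position, request_type, facility_index).
    construction_cost: function (position, types) -> opening cost.
    dist: metric on positions.
    """
    total = sum(construction_cost(pos, types) for pos, types in facilities)
    for r_pos, r_type, i in assignments:
        pos, types = facilities[i]
        # A request may only be served by a facility offering its type.
        assert r_type in types
        total += dist(r_pos, pos)
    return total

# Toy instance on the real line; opening a facility costs 1 per served type.
facilities = [(0.0, frozenset({"a"})), (5.0, frozenset({"a", "b"}))]
assignments = [(1.0, "a", 0), (4.0, "b", 1)]
cost = omflp_cost(facilities, assignments,
                  lambda pos, types: len(types),
                  lambda x, y: abs(x - y))
```

Here construction costs 1 + 2 = 3 and the two assignments cost 1 each, for a total of 5. The online difficulty lies in paying these costs irrevocably as requests arrive.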

In earlier talks, we showed a lower bound involving the length of the request sequence as well as the number of types. In this talk, we present a deterministic algorithm for the OMFLP with non-trivial competitiveness. Further, we discuss special cases in which the competitiveness changes depending on restrictions on the cost function, and we introduce changes to the model that raise interesting new problems for future research.

October 16, 2019, 2:00 pm, F1.110

Stretching a Communication Chain of Oblivious Mobile Robots — The Continuous Case

In this talk, we consider the problem of arranging a communication chain of n robots on a straight line of length n-1. The robots have a limited viewing range of 1, do not agree on a common coordinate system or compass, and are oblivious. In contrast to previous talks about this problem, we consider the continuous time model, in which robots can continuously observe their environment and immediately adjust their own trajectories based on their observations. We introduce the Max-Move-On-Bisector strategy, analyze its behavior in one-dimensional configurations and give insights into the current state of research on two-dimensional configurations.

October 09, 2019, 2:00 pm, F1.110

Visibility-Aware Progressive Farthest Point Sampling on the GPU

In this talk, I present the first algorithm for progressive sampling of 3D surfaces with blue noise characteristics that runs entirely on the GPU. The performance of the algorithm is comparable to state-of-the-art GPU Poisson-disk sampling methods, while additionally producing ordered sequences of samples where every prefix exhibits good blue noise properties.

The basic idea is to reduce the 3D sampling domain to a set of 2.5D images, which we sample in parallel utilizing the rasterization hardware of current GPUs. This allows for simple visibility-aware sampling that only captures the surface as seen from outside the sampled object, which is especially useful for point-based level-of-detail rendering methods. However, the method can easily be extended to sample the entire surface without changing the basic algorithm. I provide a statistical analysis of the algorithm, show that it produces good blue noise characteristics for every prefix of the resulting sample sequence, and analyze the performance of the method compared to related state-of-the-art sampling methods.
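The "ordered sequence where every prefix is a good sample" property comes from farthest point sampling: each new sample is the point farthest from all samples chosen so far. A sequential CPU reference of this classical greedy ordering (the GPU algorithm of the talk is far more involved; names and the toy point set are illustrative):

```python
import math

def farthest_point_sampling(points, k):
    """Greedy farthest-point ordering of the first k points.

    Maintains, for every point, its distance to the nearest sample
    chosen so far, and repeatedly picks the point maximizing it.
    """
    order = [0]  # start from an arbitrary point
    d = [math.dist(p, points[0]) for p in points]
    while len(order) < k:
        nxt = max(range(len(points)), key=d.__getitem__)
        order.append(nxt)
        # Update nearest-sample distances with the new sample.
        d = [min(d[i], math.dist(points[i], points[nxt]))
             for i in range(len(points))]
    return order

pts = [(0, 0), (10, 0), (5, 5), (0, 1)]
order = farthest_point_sampling(pts, 3)
```

On this point set the ordering is [0, 1, 2]: after starting at (0, 0), the farthest point is (10, 0), then (5, 5); the nearby point (0, 1) would come last. Every prefix of the ordering is thus a well-spread subsample, which is exactly what progressive level-of-detail rendering needs.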

October 02, 2019, 2:00 pm, F1.544

Local Gathering of Mobile Robots in Three Dimensions

In the future, large groups of inexpensive robots might be used to collectively perform a variety of tasks. In order to facilitate this, there has been a considerable interest in the investigation of related algorithmic and complexity problems.

One particular problem that has been studied is robot gathering, in which all robots have to meet at some (non-specified) point in a Euclidean space, starting from an arbitrary configuration. Furthermore, it is typically assumed that the robots only have a limited vision range, share no information and have to act in a distributed fashion.

In this setting and under a synchronous, discrete time model, Ando et al. presented an algorithm called Go-to-the-Center that solves this problem. Degener et al. then showed that it takes \Theta(n^2) rounds. Under a continuous time model, Podlipyan described the class of contracting algorithms, for which he showed a gathering time of O(nd), where d is the initial diameter of the configuration. Kempkes et al. showed that a member of this class known as Move-on-Bisector can solve gathering in time O(n) in the continuous time setting.
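A synchronous round of such a local rule can be simulated in a few lines. The sketch below is not Go-to-the-Center itself (which moves each robot toward the center of the smallest enclosing circle of its visible neighbors) but a simplified centroid-based stand-in with a bounded step length; it illustrates the round structure only and, unlike the algorithms above, carries no connectivity guarantee in general:

```python
def gathering_step(positions, viewing_range, step):
    """One synchronous round: each robot moves distance <= step toward
    the centroid of all robots within its viewing range (including
    itself).  Simplified illustration, not Go-to-the-Center."""
    new_positions = []
    for x, y in positions:
        nbrs = [(a, b) for a, b in positions
                if (a - x) ** 2 + (b - y) ** 2 <= viewing_range ** 2]
        cx = sum(a for a, _ in nbrs) / len(nbrs)
        cy = sum(b for _, b in nbrs) / len(nbrs)
        dx, dy = cx - x, cy - y
        d = (dx * dx + dy * dy) ** 0.5
        if d <= step:
            new_positions.append((cx, cy))  # target reachable this round
        else:
            new_positions.append((x + step * dx / d, y + step * dy / d))
    return new_positions

# Three mutually visible robots contract to a common point.
robots = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
for _ in range(50):
    robots = gathering_step(robots, viewing_range=2.0, step=0.1)
```

Since all three robots see each other throughout, they converge to a common point within a few dozen rounds of this simulation.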

However, all of these results have so far only been established in the two-dimensional Euclidean space. The goal of my master's thesis is therefore to attempt to find generalizations of these results for three or even higher dimensions.

(Introductory presentation, Master's thesis)