Summer Semester 2017

Daniel Jung

September 4, 2017, 11:00 am, F1.310

Gathering Anonymous, Oblivious Robots on a Grid

We consider a swarm of n autonomous mobile robots, distributed on a 2-dimensional grid. A basic task for such a swarm is the gathering process: all robots have to gather at one (not predefined) place. A common local model for extremely simple robots is the following: the robots do not have a common compass, only have a constant viewing radius, are autonomous and indistinguishable, can move at most a constant distance in each step, cannot communicate, are oblivious, and do not have flags or states. The only gathering algorithm under this robot model with known runtime bounds needs O(n²) rounds and works in the Euclidean plane. The underlying time model for the algorithm is the fully synchronous FSYNC model. On the other hand, in the case of the 2-dimensional grid, the only known gathering algorithms for the same time model and a similar local model additionally require a constant amount of memory, states, and "flags" to communicate these states to neighbors in viewing range. They gather in time O(n).
In this paper we contribute the (to the best of our knowledge) first gathering algorithm on the grid that works under the same simple local model as the above-mentioned Euclidean plane strategy, i.e., without memory (oblivious), "flags", and states. We prove its correctness and an O(n²) time bound in the fully synchronous FSYNC time model. This time bound matches the time bound of the best known algorithm for the Euclidean plane mentioned above. We say gathering is done if all robots are located within a 2×2 square, because in FSYNC such configurations cannot be gathered to a single point.

Jonas Manuel Hanselle

July 24, 2017, 2:00 pm, F1.110

Development of an adaptive approximate illumination algorithm using surfels for global illumination

Global illumination is one of the key challenges in computer graphics and an essential task for rendering photorealistic images. For an exact calculation of the light in a scene, the interaction of the light with all objects in the scene has to be simulated based on exact physical models. Because of the high computational cost of such simulations, methods based on physically correct simulations are usually not suitable for real-time applications. Therefore, most real-time applications use an approximation of indirect light. This thesis presents an algorithm for approximating global illumination in static scenes with dynamic light, based on a point approximation of the original scene. A subset of these points is used as photons. The photons gather indirect light from the approximated scene and calculate a diffuse color value from it. Afterwards, the photons are used as virtual point lights and emit indirect light of the calculated color into the scene. Since the light gathering is the bottleneck of the algorithm, an adaptive approach was developed which recalculates indirect light only locally where it is necessary. For this purpose, a subset of the points is selected as notification points that observe their current illumination status and notify the affected photons when it changes. The notified photons subsequently recalculate their color values.
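The notification mechanism described above can be sketched as an observer pattern (an illustrative sketch with hypothetical class names and threshold logic, not the thesis' implementation):

```python
class Photon:
    """A photon caches a diffuse color and recomputes it only when dirty."""
    def __init__(self):
        self.dirty = True
        self.color = None

    def invalidate(self):
        self.dirty = True

    def recompute(self, gather):
        if self.dirty:              # only recompute where necessary
            self.color = gather()   # expensive light gathering
            self.dirty = False

class NotificationPoint:
    """Observes its local illumination and notifies the affected photons
    when the observed value changes noticeably (hypothetical threshold)."""
    def __init__(self, photons, threshold=0.1):
        self.photons = photons      # photons whose color depends on this point
        self.last_value = None
        self.threshold = threshold

    def observe(self, value):
        if self.last_value is None or abs(value - self.last_value) > self.threshold:
            self.last_value = value
            for p in self.photons:
                p.invalidate()

# Toy run: the photon recomputes only after a notification.
p = Photon()
point = NotificationPoint([p])
p.recompute(lambda: 1.0)    # initial computation
point.observe(0.5)          # first observation always notifies
p.recompute(lambda: 2.0)    # photon was dirty, recomputes
point.observe(0.55)         # change below threshold: no notification
p.recompute(lambda: 3.0)    # photon stays clean, keeps its color
```

The saving comes from the last call: the gathering closure is never invoked because no notification arrived.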

(Final presentation Bachelor's Thesis, talk in German)

Matthias Feldotto

July 19, 2017, 2:00 pm, F1.110

Computing Approximate Pure Nash Equilibria in Shapley Value Weighted Congestion Games

We study the computation of approximate pure Nash equilibria in Shapley value (SV) weighted congestion games, introduced by Kollias and Roughgarden. This class of games considers weighted congestion games where Shapley values are used as an alternative (to proportional shares) for distributing the total cost of each resource among its users. We focus on the interesting subclass of such games with polynomial resource cost functions and present an algorithm that computes approximate pure Nash equilibria with a polynomial number of strategy updates. Since computing a single strategy update is hard, we apply sampling techniques which allow us to achieve polynomial running time. The algorithm builds on the algorithmic ideas of Caragiannis et al.; however, to the best of our knowledge, this is the first algorithmic result on the computation of approximate equilibria using cost shares other than proportional shares in this setting. We present a novel relation that approximates the Shapley value of a player by her proportional share and vice versa. As side results, we upper bound the approximate price of anarchy of such games and significantly improve the best known factor for computing approximate pure Nash equilibria in weighted congestion games of Caragiannis et al.
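For a single resource with a polynomial cost function, the two cost-sharing rules being related here can be computed exactly on small instances (an illustrative sketch; the cubic cost and the brute-force average over arrival orders are assumptions for exposition, not the paper's method):

```python
from itertools import permutations

def shapley_shares(weights, cost):
    """Exact Shapley cost shares on one resource: average each player's
    marginal cost increase over all arrival orders (feasible for small n)."""
    n = len(weights)
    shares = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        load = 0.0
        for i in order:
            shares[i] += cost(load + weights[i]) - cost(load)
            load += weights[i]
    return [s / len(perms) for s in shares]

def proportional_shares(weights, cost):
    """Each player pays a fraction of the total cost proportional to her weight."""
    total = sum(weights)
    return [w / total * cost(total) for w in weights]

cost = lambda x: x ** 3        # a polynomial resource cost function
weights = [1.0, 2.0, 3.0]
sv = shapley_shares(weights, cost)
ps = proportional_shares(weights, cost)
# Both rules are budget-balanced: the shares sum to the total cost c(6) = 216,
# but the individual shares differ (e.g. sv[0] = 40 vs. ps[0] = 36).
```

For linear or quadratic costs the two rules happen to coincide on a single resource; the cubic cost is chosen precisely so that the difference the paper's approximation relation addresses is visible.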

Daniel Schmand

RWTH Aachen University

July 17, 2017, 10:00 am, F1.310

Network Design Games with Bandwidth Competition and Selfish Followers

We study the following network design game. The game is set in two stages. In the first stage some players, called providers, aim to maximize their profit individually by investing in bandwidth on edges of a given graph. The investment yields a new graph on which Wardrop followers, called users, travel from their source to their sink through the network. The cost for any user on an edge follows market principles and is dependent on the demand for that edge and the supplied bandwidth. The profit of the providers depends on the total utilization of their edges, the current price for their edges and the bandwidth. We analyze the existence and uniqueness of Nash Equilibria for the providers in the described game. We provide insights on how competition between providers and the number of providers influence the total cost for the users.

Joint work with Alexander Skopalik (Paderborn University), Carolin Tidau (RWTH Aachen University)

Felix Biermeier

July 12, 2017, 2:00 pm, F1.110

Continuous Monitoring of Distributed Data Streams for Partitioning Problems

Continuous monitoring of distributed data streams has attracted attention in recent years. In this thesis we introduce and analyze the $\gamma$-Interval Partitioning Monitoring problem, in which the server is asked to continuously output a disjoint partition of $n$ nodes into intervals such that nodes of the same interval observe similar values with respect to an error value $\gamma$. The goal is to minimize the communication cost between the server and the nodes. We present randomized algorithms for the $\gamma$-Interval Partitioning Monitoring problem and variants of it. First, we present a randomized algorithm that solves a simplified variant of this problem for a single time step using $O(m \log(\gamma) \log(n))$ messages in expectation. The factor $m$ is the minimum partition size of an optimal solution. Then, we present a filter-based online algorithm that solves the $\gamma$-Interval Partitioning Monitoring problem and compare it with a restricted optimal offline algorithm that is also filter-based and whose partition size is bounded by a parameter $k$. We show that the online algorithm is $O(k \log^2(n) \log(\gamma))$-competitive. Finally, we investigate another variant of the $\gamma$-Interval Partitioning Monitoring problem in which the online algorithm is allowed to double its error $\gamma$, whereas the restricted optimal offline algorithm remains as before. We present a filter-based online algorithm that improves the previous competitiveness from $O(k \log^2(n) \log(\gamma))$ to $O(k \log^2(n))$.
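For intuition, a centralized offline version of the partitioning task for a single time step can be solved by a greedy sweep over the sorted values (a simplified sketch that ignores the communication aspect entirely; it is not the thesis' protocol):

```python
def interval_partition(values, gamma):
    """Greedy sweep: partition node indices into intervals such that any
    two values in the same interval differ by at most gamma."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    intervals, current, low = [], [], None
    for i in order:
        if current and values[i] - low > gamma:
            intervals.append(current)   # close the current interval
            current = []
        if not current:
            low = values[i]             # smallest value of the new interval
        current.append(i)
    if current:
        intervals.append(current)
    return intervals

parts = interval_partition([1, 2, 10, 11, 12, 30], gamma=2)
# parts == [[0, 1], [2, 3, 4], [5]]
```

The sweep is easily seen to use the minimum number of intervals for one snapshot; the difficulty the thesis addresses is maintaining such a partition continuously while the values change, with small communication.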

(Final presentation Master's Thesis)

Björn Feldkord

July 12, 2017, 2:30 pm, F1.110

The Mobile Server Problem

We introduce the mobile server problem, inspired by current trends to move computational tasks from cloud structures to multiple devices close to the end user. An example of this is embedded systems in autonomous cars that communicate in order to coordinate their actions.
Our model is a variant of the classical Page Migration Problem. We consider a mobile server holding a data page. The server can move in the Euclidean space (of arbitrary dimension). In every round, requests for data items from the page pop up at arbitrary points in the space. The requests are served and the mobile server may move up to a given maximum distance.
We show that no online algorithm can achieve a competitive ratio independent of the length of the input sequence in this setting. Hence we augment the maximum movement distance of the online algorithms. We provide a deterministic algorithm which is simple to describe and works for multiple variants of our problem. The algorithm achieves almost tight competitive ratios independent of the length of the input sequence.
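As a toy illustration of the movement restriction and the augmentation, one can simulate a greedy server that always moves toward the current request, with its per-round movement budget inflated by a factor (1 + delta). This is an assumed baseline with hypothetical cost accounting, not the algorithm from the talk:

```python
import math

def serve(requests, max_dist, delta, start=(0.0, 0.0)):
    """Greedy mobile server: serve each request from the current position
    (paying the distance), then move toward the request point, covering
    at most (1 + delta) * max_dist per round (resource augmentation)."""
    x, y = start
    total_cost = 0.0
    positions = []
    for rx, ry in requests:
        d = math.hypot(rx - x, ry - y)
        total_cost += d                         # serving cost = distance to request
        step = min(d, (1 + delta) * max_dist)   # augmented movement budget
        if d > 0:
            x += (rx - x) / d * step
            y += (ry - y) / d * step
        total_cost += step                      # movement cost
        positions.append((x, y))
    return total_cost, positions

total, positions = serve([(3.0, 4.0)], max_dist=1.0, delta=0.0)
# One request at distance 5: serving cost 5, movement cost 1,
# server ends up at (0.6, 0.8).
```

The lower bound mentioned above says that without the (1 + delta) augmentation, no strategy of this kind (or any other online strategy) can be competitive independently of the sequence length.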

Alexander Mäcker

July 12, 2017, 3:00 pm, F1.110

Non-Clairvoyant Scheduling to Minimize Max Flow Time on a Machine with Setup Times

Consider a problem in which $n$ jobs that are classified into $k$ types arrive over time at their release times and are to be scheduled on a single machine so as to minimize the maximum flow time. The machine requires a setup taking $s$ time units whenever it switches from processing jobs of one type to jobs of a different type. We consider the problem as an online problem where each job is only known to the scheduler as soon as it arrives and where the processing time of a job only becomes known upon its completion (non-clairvoyance).
We analyze a simple modification of the FIFO strategy and show the competitiveness to be $\Theta(\sqrt{n})$, which is optimal for ``greedy-like'' algorithms. We will then also consider a smoothed analysis of the competitiveness. The smoothed competitiveness turns out to only be $O(\varepsilon^{-2} \log^2 n)$ when processing times $p_j$ are independently perturbed by adding a random value uniformly drawn from $[-\varepsilon p_j, \varepsilon p_j]$, $0 < \varepsilon < 1$.
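For intuition, plain FIFO on a machine with setup times can be simulated as follows and its maximum flow time measured (a baseline sketch; the talk's modification of FIFO is not reproduced here, and the tuple encoding of jobs is an assumption):

```python
def max_flow_time_fifo(jobs, setup):
    """Simulate FIFO on one machine. Each job is (release, type, processing).
    A setup of `setup` time units is paid whenever the job type changes.
    Returns the maximum flow time (completion minus release) over all jobs."""
    t = 0.0
    last_type = None
    worst = 0.0
    for release, jtype, proc in sorted(jobs, key=lambda j: j[0]):
        t = max(t, release)         # machine may be idle until the release
        if jtype != last_type:
            t += setup              # switch to a different job type
            last_type = jtype
        t += proc                   # process the job (known only on completion)
        worst = max(worst, t - release)
    return worst

worst = max_flow_time_fifo([(0.0, "a", 1.0), (0.0, "b", 1.0)], setup=2.0)
# Two jobs of different types released at 0: completions at 3 and 6,
# so the maximum flow time is 6.
```

Already this tiny instance shows why batching jobs of the same type matters: pure FIFO pays a setup between the two jobs that a type-aware order could share, which is the kind of effect the modified strategy in the talk has to control.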

Vipin Ravindran Vijayalakshmi

June 28, 2017, 2:00 pm, F1.110

Bounding the Inefficiency of Equilibria in Congestion Games under Taxation

Congestion games belong to a class of games in which a pure Nash equilibrium always exists. However, it is known that a pure Nash equilibrium need not be optimal with respect to the social cost, due to the non-cooperative nature of the players. This inefficiency of Nash equilibria is termed the Price of Anarchy (PoA). We study the effects of economic incentives such as taxes and their ability to induce desirable behavior amongst players in a congestion game. The basic idea involves transforming any congestion game G into a game G', where the player disutility comprises the cost incurred by the player using a particular resource and the additional tax imposed for using that resource. The PoA of the game G is then analyzed with respect to the equilibria of the game G'. The focus here is to construct a methodology that determines feasible taxes in order to minimize the inefficiency of equilibria in congestion games with linear and polynomial cost functions that are strictly non-decreasing and have non-negative coefficients, in the presence of refundable taxes. The hope is that the analysis can be further extended to games with arbitrary cost functions and to any objective function associated with a congestion game.
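The effect of refundable taxes can be seen on the classical Pigou instance with linear costs (a standard textbook example, not a construction from the thesis; the marginal-cost tax used below is one well-known choice of feasible tax):

```python
def pigou_equilibrium(with_tax):
    """Wardrop equilibrium of the Pigou instance: link 1 has latency
    c1(x) = x, link 2 has constant latency c2(x) = 1, total demand 1.
    With the refundable marginal-cost tax t(x) = x on link 1, a player's
    disutility on link 1 becomes x + x = 2x."""
    slope = 2.0 if with_tax else 1.0
    # Equilibrium: flow x on link 1 satisfies slope * x = 1,
    # capped at 1 if link 1 is cheaper for the entire demand.
    return min(1.0, 1.0 / slope)

def social_cost(x):
    # Taxes are refunded, so social cost counts latency only.
    return x * x + (1.0 - x)

untaxed = social_cost(pigou_equilibrium(False))  # all demand on link 1
taxed = social_cost(pigou_equilibrium(True))     # optimal half/half split
```

Without taxes the equilibrium routes everything over link 1 at social cost 1, while the optimum is 3/4; the tax shifts the equilibrium to the optimal split, reducing the PoA on this instance from 4/3 to 1.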

(Introductory presentation Master's Thesis)

Shouwei Li

June 14, 2017, 2:00 pm, F1.110

Parallel Iterative Compression

Iterative Compression is a general technique for parameterized minimization problems, where the goal is to find a small set of vertices or edges of the graph whose removal makes the graph admit some global property. The upper bound on the size of this set is the parameter $k$. In this talk, we will try to parallelize this technique to obtain FPPT algorithms for several problems, such as vertex cover, where FPPT refers to algorithms with polylogarithmic running time, depending on the parameter $k$, that use a polynomial number of processors.

Sascha Brandt

June 14, 2017, 2:30 pm, F1.110

Rendering of Complex Heterogeneous Scenes for VR and HMD using Progressive Blue Surfels

In this talk I present a technique for rendering highly complex 3D scenes in real time by generating evenly distributed points on the scene's surface. The technique is applicable to a wide range of scene types, like scenes directly based on complex and detailed CAD data consisting of billions of polygons (in contrast to scenes handcrafted solely for visualization). This allows us to visualize such scenes smoothly, even in VR on an HMD, with high image quality while maintaining the necessary frame rate. In contrast to other point-based rendering methods, we place points in an approximated blue-noise distribution only on visible surfaces and store them in a highly GPU-efficient data structure, allowing us to progressively refine the number of rendered points to maximize the image quality for a given target frame rate. Our evaluation shows that scenes consisting of a large number of polygons can be rendered on an Oculus HMD at interactive frame rates with good visual quality on standard hardware.

Joshua Nowack

May 31, 2017, 2:00 pm, F1.110

On-The-Fly Construction of Connected Road Networks from Given Components

Virtual landscapes and road networks are used in many applications, e.g. vehicle simulations. Realistic road courses are often important for this type of application. Constructing the needed landscapes and roads is a time-consuming task. If the user is to be able to extend the road network while it is in use, quick construction is especially important.
In this thesis a technique is presented that enables a user to construct road networks by combining individual map parts, offering a quick way to create and extend the desired road network. Real map data is used in the process. This reduces the time needed for construction and ensures a realistic course of roads. The introduced algorithm manipulates terrain and road network to create a realistic landscape.
The algorithm performs the following steps: adjustment of the heightmap by interpolation, adjustment of roads, removal of unnecessary roads, and creation of new connecting roads. A prototype will be used to analyse the results of the algorithm.

(Final Presentation Bachelor's Thesis)

Pavel Podlipyan

May 31, 2017, 2:30 pm, F1.110

Continuous gathering without collisions

In this talk we consider the gathering problem for $n$ autonomous mobile robots with bounded viewing range in the continuous time model on the Euclidean plane. For continuous gathering algorithms on the Euclidean plane, we introduce a simple convergence criterion and a corresponding class of gathering algorithms that solve the gathering problem in time $O(nd)$, where $d$ is the diameter of the initial configuration.
In the second part we introduce the Go-To-The-Relative-Center (GTRC) algorithm. We show that GTRC correctly solves the continuous gathering problem in time $O(n^2)$. We then consider a relaxed robot model and, using the convergence criterion, develop the Safe-Go-To-The-Relative-Center algorithm. We show that in the relaxed model, non-oblivious, chiral, luminous robots solve the continuous gathering problem in time $O(n^2)$ without collisions.

Jan Bürmann

May 24, 2017, 2:00 pm, F1.110

Complexity of Signalling in Routing Games under Uncertainty

It is common that self-interested drivers are informed about congestion on the roads via traffic reports; the question is how much information is necessary and what information should be given to improve traffic flow and reduce average travel time. This work extends Wardrop's routing model by including a random demand to define a new signalling model. This new routing model is used to study the change in equilibrium flow for linear latency functions under changes in the signalling probability and the necessary number of signals. The work also presents an algorithmic computation of a signalling policy. For the number of signals, an instance is defined that gives a lower bound on the number of signals. For the equilibrium flow, the equilibrium condition is used to derive equations for the change in latency, and an $\varepsilon$-approximation algorithm for two demand values and two signals is devised. The result on the lower bound is that at least as many signals as demand values are necessary. For the results on the equilibrium flow and the expected cost, the change of the flow and the expected flow depends on multiplicative factors of $(1+\varepsilon\tilde{r})^{n-1}$ and $(1-\varepsilon\tilde{r})^{n-1}$ for parallel-link networks, and $(1-\varepsilon\tilde{r})^{(n-1)2^{(n-1)}}$ and $(1+\varepsilon\tilde{r})^{(n-1)2^{(n-1)}}$ for general networks, where $n$ is the number of paths and $\tilde{r}$ is the positive difference of the demand values. The algorithm has a runtime of:
\begin{align*}
    O\left(\left(R_{EQ} + 2|E|\right) \cdot \frac{\Pr[r_1]\Pr[r_2] + \varepsilon + 1.5\varepsilon^2}{\varepsilon^2}\right),
\end{align*}
where $R_{EQ}$ is the runtime to calculate an equilibrium and $|E|$ is the number of edges in the network. The quality of the solution follows from the results on the equilibrium flow change. The results give a first insight into the area and the model, and the results on the equilibrium flow and the algorithm leave the opportunity for extensions.

(Final Presentation Master's Thesis)

Sören Riechers

May 24, 2017, 2:30 pm, F1.110

Fully Dynamic Bin Packing without Size Limit

In my talk, I introduce a model where a bin packing needs to be sustained over time. At any time, one item of arbitrary size can arrive or depart. A bin packing with as few bins as possible is to be sustained. In each time step, the number of bins in the current packing is divided by the minimum number of bins that could be achieved by repacking the bins optimally. The competitive ratio is then determined by maximizing this ratio over all time steps.
For the case where small items may be grouped, there exists a PTAS for this problem. For the general case, no upper bound for this problem is known. A lower bound of 4/3 exists and was later improved to approximately 1.3871.
I introduce an approach that easily obtains a (3/2)-approximation for this problem by treating small and large items separately. I will then show how this could potentially be improved to $\sqrt{2}$. By using more involved techniques inspired by the lower bound of 1.3871, we hope to further improve this algorithm and achieve a $(1.3871+\varepsilon)$-approximation, which is essentially the best we can hope for.
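The small/large separation can be sketched with a first-fit rule over unit-capacity bins (an assumed scheme in the spirit of the approach described above, not necessarily the talk's exact algorithm; the repacking-over-time aspect is omitted):

```python
def pack(items, split=0.5):
    """First-fit sketch treating large items (size > 1/2) and small items
    (size <= 1/2) separately. Bin capacity is 1. Each large item opens its
    own bin since no two large items fit together; small items are then
    placed first-fit into any bin with enough remaining capacity."""
    large = [s for s in items if s > split]
    small = sorted((s for s in items if s <= split), reverse=True)
    bins = [[s] for s in large]
    for s in small:
        for b in bins:                   # first-fit over existing bins
            if sum(b) + s <= 1.0:
                b.append(s)
                break
        else:
            bins.append([s])             # no bin fits: open a new one
    return bins

bins = pack([0.6, 0.6, 0.3, 0.3, 0.3])
# Two large items open two bins; two small items top them up to 0.9,
# and the remaining small item opens a third bin.
```

Small items alone can always be packed nearly perfectly (every bin except possibly the last is more than half full), which is the intuition behind bounding the loss of the separation by a factor of 3/2.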

Johannes Schaefer

May 17, 2017, 2:00 pm, F1.110

The problem of placing complex services in (complex) networks

Modern IT services are composed of a multitude of smaller service components, some of them shared between multiple services. Each of these service components has resource needs, depending on the size of the input it gets, which equally determines the communication volume between the machines/data centers that host collaborating components. Networks of data centers (or combinations of multiple data centers) are characterized by the resources of their nodes in the form of computational power, memory, and storage capacities, as well as by the link quality (w.r.t. bandwidth and delay) between the nodes. The goal is to place and connect service components in such a way that the resource constraints are not violated or the violation is minimal.
We aim at finding models and algorithms that allow efficient computation of (approximate) solutions to simplified versions of the aforementioned scenario and will discuss our starting points in this area.

Manuel Malatyali

May 17, 2017, 2:30 pm, F1.110

A communication-efficient dynamic distributed data structure for top-k and approximate-select queries

We consider an extension of the distributed monitoring model: $n$ nodes receive updates of their integer values, they can send their current value to the server, and the server can broadcast one such value to all nodes. Typically, the tasks considered in the literature let the server monitor some information about the current values of the clients, using a communication volume as small as possible.
In this paper, we extend such monitoring tasks to distributed dynamic data structures, where updates and queries are interleaved arbitrarily. We consider the following updates and queries: In each step, some nodes receive updates of their values (UPDATE-BATCH), or a query like TOP-$k$ (output the $k$ smallest values) or APPROXIMATE-$k$-SELECT (output an element with a rank "close to" $k$) has to be answered by the server.
We present a randomised distributed data structure for this task. By proving lower bounds, we show that the communication volume needed by our strategy is close to optimal. In addition, we present a trade-off between the overall communication volume and the number of communication rounds of our data structure.

Michael Braun

May 10, 2017, 2:00 pm, F1.110

Continuous Aggregation in Dynamically Connected Networks

The study of dynamic networks in which links constantly change and do not stabilize has recently gained popularity in order to cope with modern technologies that exhibit this behavior, such as wireless ad-hoc networks. However, existing works typically include the assumption that these networks remain fully connected at all times. In this work, this assumption is removed. Instead, a model is proposed in which n nodes communicate within a dynamic graph, where the only requirements are that the network remains stable for blocks of T rounds and that each node can causally influence every other node within d many of those blocks. Additionally, the size of connected components can be restricted to a minimum size of C at all times. Within this model, the following problems are studied: The first is k-token dissemination, in which k pieces of information have to be spread to all nodes. This problem is solved in time O(dkn/T + dT).
Furthermore, the problems of continuous extremum and summation are studied, in which all nodes receive inputs each round which then have to be aggregated and disseminated to all nodes for as many rounds as possible. For the extremum, a delay of O(dn · MIS(n)) and an output rate of Ω(T/(dn · MIS(n)²)) can be achieved. In the case of summation, for C = O(T/MIS(n)), a delay of O(dn²/(C · MIS(n))) and an output rate of Ω(TC/(dn²)) are reached. For C = Ω(T/MIS(n)), a delay of O(dn²/T) and an output rate of Ω(T²/(dn² · MIS(n))) are achieved. MIS(n) denotes the time required to compute a maximal independent set.

(Final presentation Bachelor's Thesis)

Steffen Knorr

May 10, 2017, 2:30 pm, F1.110

Global Illumination using Surfel Cone Tracing

One of the most challenging and computation-intensive tasks in real-time rendering is the calculation of indirect light. Crassin showed that using cones and voxels instead of traditional rays of light can significantly improve the rendering time. Furthermore, surfels have proven to be a good way of approximating objects within a scene. This thesis aims to combine those two ideas and investigate the performance of a cone tracing algorithm for computing indirect light for surfels. An algorithm will be developed and evaluated against existing algorithms.

(Introductory presentation Master's Thesis)

Faisal Abu-Khzam
Lebanese American University, Beirut, Lebanon

April 3, 2017, 2:00 pm, F1.110

A Multivariate Complexity Analysis of Cluster Editing

The Cluster Editing problem seeks a transformation of a given undirected graph into a disjoint union of cliques via a (minimum) number of edge additions or deletions. A multi-parameterized version of the problem is studied, featuring a number of input parameters that bound the number of edge additions and deletions per single vertex, as well as the size of a clique cluster.
We briefly overview previous work on the problem and show that it remains NP-hard even when only one edge can be deleted and at most two edges can be added per vertex. However, the new formulation allows us to solve Cluster Editing (exactly) in polynomial time when the number of edge-edit operations per vertex is smaller than half the minimum cluster size. As a byproduct, we obtain a simple kernelization algorithm that delivers linear-size kernels when the two edge-edit bounds are fixed constants.
We further discuss some practical aspects of the multi-parameterized approach and the impact of adding more parameters, such as the number of outliers. Some open problems and new research directions are also discussed.
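The target structure of Cluster Editing, a disjoint union of cliques, can be recognized by checking that the graph contains no induced path on three vertices; a minimal sketch (a standard characterization, not part of the talk's results):

```python
from itertools import combinations

def is_cluster_graph(n, edges):
    """Return True iff the graph on vertices 0..n-1 is a disjoint union
    of cliques, i.e. it contains no induced P3 (a path a - v - b where
    a and b are not adjacent)."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for v in range(n):
        for a, b in combinations(adj[v], 2):
            if b not in adj[a]:     # a - v - b is an induced P3
                return False
    return True

# A triangle is a single clique; a path on three vertices is not.
triangle_ok = is_cluster_graph(3, [(0, 1), (1, 2), (0, 2)])
path_ok = is_cluster_graph(3, [(0, 1), (1, 2)])
```

Any edit set for Cluster Editing is exactly a set of additions and deletions whose application makes this check succeed, so such a verifier is the natural building block for the per-vertex-bounded search discussed above.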