# Summer Term 2012

#### September 28, 2012, 2:00 pm, F0.530

On scheduling with multi-core and multi-speed processors using power down

This thesis considers a job scheduling problem where we are given a set of jobs, with release times and deadlines, as well as a speed-adjustable processor. This energy-aware perspective is important since energy prices are increasing and reducing energy consumption extends the battery life of portable devices. We regard the minimization objective, where one has to minimize the total cost for scheduling jobs plus the value of rejected jobs. There are three well-known extensions that enhance the realism of this model: (i) a sleep mode for every processor, which can be used to save energy, (ii) multi-core processors, and (iii) a value for every job together with the ability of the processor to reject jobs (a rejected job incurs a cost equal to its value).

This thesis connects (ii) and (iii) and obtains the same competitive factor as is known for (iii). Moreover, this thesis considers an improved algorithm for (i). For the maximization objective (where the profit is the sum of the values of all completed jobs minus the scheduling cost) and the commonly used power function, we show that no algorithm can have a constant competitive factor.

(Final presentation Bachelor Thesis)

#### September 19, 2012, 2:00 pm, F1.110

Local Swaps and Outdated Information in Basic Network Creation Games

The bachelor thesis "Lokale Swaps und überholte Informationen in Basic Network Creation Games" introduces a variant of the Basic Network Creation Games with Communication Interests by Cord-Landwehr et al. In this model, the network is created by the nodes, which can swap the endpoints of incident edges in order to decrease their private costs.

The nodes will be restricted to local swaps, which means they can only swap an edge to an endpoint that is a neighbor of the former endpoint. It will be shown that this change has no influence on the worst-case private costs of the nodes in an equilibrium or on the price of anarchy of this game.

The second part deals with the convergence speed of the game under different circumstances, such as the round model used. The influence of outdated information at the nodes is also investigated using simulations of the convergence process.

(Final presentation Bachelor Thesis, Talk in German)

#### September 19, 2012, 2:30 pm, F1.110

Adaptive Connection Strategies in Dynamic Search Networks

In the bachelor thesis "Adaptive Verbindungsstrategien in dynamischen Suchnetzwerken" we introduce a model with a server that optimizes its connections in the network with respect to an online stream of service requests. The server offers various services and maintains, for each of them, a connection to a so-called interest group. The participating nodes are partitioned by their interests into these interest groups, which represent the various services.

The connection for each interest group can be changed with a swap operation, first introduced by Alon et al. The analysed deterministic and randomized algorithms perform swap operations to minimize the total cost induced by the service requests. The cost of a swap operation to a participating node equals the distance to that node. For this model, we show the competitive ratio of each algorithm in a static model and in an extended dynamic model, where the dynamics consist of changing edge costs and changing interests of the participating nodes. Especially for the dynamic model, we present simulation results that investigate the influence of these changes on the competitive ratio under various simulation variants.

(Final Presentation Bachelor Thesis, Talk in German)

#### September 19, 2012, 3:00 pm, F1.110

A memory extended strategy for building short chains of mobile robots

In our world there are places that a human being cannot reach, for example because they are too dangerous. To explore these places anyway, one can use robot swarms. Such a swarm consists of many relatively simple robots that operate as a whole to analyze the inaccessible territory.

The bachelor thesis "A memory extended strategy for building short chains of mobile robots" examines a strategy for robot swarms organized as a chain. In this chain, each robot can only communicate with its direct neighbors, and the two outermost robots are stationary. The strategy, called d-bounded GoToTheMiddle (d-GTM), is a round-based algorithm that sends each robot toward the midpoint of its neighbors in order to minimize the length of the chain between the two stationary robots. However, a robot may only move a distance of d per round and has to stop once this distance is reached.
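One round of this rule can be sketched as follows; this is my own minimal reading of the abstract (the function name, the use of 2-D coordinates, and the synchronous update are illustrative choices, not taken from the thesis):

```python
import math

def d_gtm_round(positions, d):
    """One synchronous round of d-bounded GoToTheMiddle (illustrative sketch).

    positions: list of (x, y) robot positions along the chain; the first
    and last robots are stationary. Each inner robot moves toward the
    midpoint of its two chain neighbors, but at most a distance of d.
    """
    new_positions = list(positions)
    for i in range(1, len(positions) - 1):
        x, y = positions[i]
        (lx, ly), (rx, ry) = positions[i - 1], positions[i + 1]
        mx, my = (lx + rx) / 2, (ly + ry) / 2      # neighbors' midpoint
        dist = math.hypot(mx - x, my - y)
        if dist <= d:                               # midpoint reachable this round
            new_positions[i] = (mx, my)
        else:                                       # move exactly distance d toward it
            step = d / dist
            new_positions[i] = (x + (mx - x) * step, y + (my - y) * step)
    return new_positions
```

For example, a middle robot at (1, 3) between stationary robots at (0, 0) and (2, 0) moves straight down toward (1, 0), but with d = 1 it stops at (1, 2) after one round.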

The d-GTM strategy does not use any knowledge from previous rounds. This is where the bachelor thesis tries to improve the algorithm: by analyzing the angle of its movement direction in the previous round, each robot tries to predict whether it is moving in the right direction. With this information, each robot adjusts its own movement bound d and moves further if the direction stays similar.

Simulations and their analysis show that this modification yields an improvement when only a small number of robots is used. Unfortunately, there is good reason to believe that, asymptotically, both strategies need the same number of rounds.

(Final Presentation Bachelor Thesis, Talk in German)

#### September 12, 2012, 2:00 pm, F1.110

Algorithm engineering of external path-finding algorithms in 3D scenes

To compute a shortest path for robots in a 3D environment, it is often necessary to provide both the 3D model of the surroundings and the information about which parts of it are accessible for a certain robot. Graph-search algorithms are then used to find the shortest path based on this information.

In the bachelor thesis "Dynamische 3D-Wegeplanung für industrielle Fertigungsanlagen" by G. Schaumann, a discrete motion-planning approach based on framed rectangles, a derivative of framed quadtrees for 3D scenes, is introduced that is able to determine the drivable area solely from the 3D model and the robot's specifications. The 3D scene is first simplified by voxelizing it and storing it in an octree. Then the drivable area is constructed, starting from the robot's start voxel and expanding to neighboring voxels via the octree's neighborhood search, while taking the robot's size and the slopes it can overcome into account. The drivable voxels are turned into framed rectangles by, essentially, dividing them into disjoint rectangles and extracting all voxels that are not part of the rectangle borders.

As a consequence of this approach, the octree and the set of all voxels accessible to the robot require a lot of memory, both in general and in comparison to the framed-rectangle representation of the drivable area. Since the memory requirement increases rapidly as the 3D scene grows or as the voxel resolution becomes finer, main memory effectively limits the size of the computable input. To circumvent these limitations, external algorithms and data structures are introduced and subsequently evaluated in this bachelor thesis.

(Talk in German, Final Presentation Bachelor Thesis)

#### September 5, 2012, 2:00 pm, F1.110

An $O(\log k)$-competitive Algorithm for Generalized Caching

In the generalized caching problem, we have a set of pages and a cache of size $k$. Each page $p$ has a size $w_p\ge1$ and fetching cost $c_p$ for loading the page into the cache. At any point in time, the sum of the sizes of the pages stored in the cache cannot exceed $k$. The input consists of a sequence of page requests. If a page is not present in the cache at the time it is requested, it has to be loaded into the cache incurring a cost of $c_p$.
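The cost model above can be made concrete with a small simulator; this is only an illustration of the problem's cost accounting, not the paper's algorithm, and the function names and the pluggable eviction policy are my own:

```python
def caching_cost(requests, size, cost, k, evict_order):
    """Total fetch cost of serving `requests` with a cache of capacity k.

    size[p], cost[p]: size w_p >= 1 and fetch cost c_p of page p.
    evict_order: callable picking the next victim from a set of cached
    pages -- a placeholder for a real eviction policy. Assumes w_p <= k.
    """
    cache = set()
    total = 0
    for p in requests:
        if p in cache:
            continue                      # cache hit: no cost
        total += cost[p]                  # miss: fetch p, pay c_p
        cache.add(p)
        while sum(size[q] for q in cache) > k:
            cache.discard(evict_order(cache - {p}))   # evict until p fits
    return total
```

For instance, with pages a (size 1, cost 2) and b (size 2, cost 3), capacity k = 2, and the request sequence a, b, a, every request is a miss and the total cost is 2 + 3 + 2 = 7, since a and b never fit together.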

We give a randomized $O(\log k)$-competitive online algorithm for the generalized caching problem, improving the previous bound of $O(\log^2 k)$ by Bansal, Buchbinder, and Naor. This improved bound is tight and of the same order as the known bounds for the classic problem with uniform weights and sizes. We use the same LP-based techniques as Bansal et al., but provide improved and slightly simplified methods for rounding fractional solutions online.

This is joint work with Anna Adamaszek, Artur Czumaj, and Matthias Englert.

#### July 18, 2012, 2:00 pm, F0.530

Rendering of complex animated scenes in real time

Today's simulations become more and more complex, using animated objects to simulate machines and human behavior in order to obtain authentic simulation results. To cope with the complexity of such scenes, an algorithm is needed that reduces detail without losing important information carried by the animation.

This master thesis focuses on developing and evaluating such an algorithm: one capable of reducing a complex animated scene while still producing an acceptable result without losing any simulation information. The basis for this algorithm is the Randomized Sample Tree, which determines the objects that have to be rendered. To overcome the problem of invalid cells, caused by elements being transformed out of their assigned cell, a modification based on the Skeletal Animation technique is introduced. For the evaluation, the results are compared with those of a standard rendering engine and of a modified Level of Detail algorithm capable of handling animated objects within one scene.

#### July 18, 2012, 2:30 pm, F0.530

Design and evaluation of a rendering method for real-time walk-through systems of complex 3-D scenes

Today's 3-D scenes are very complex and consist of millions of polygons. In order to test and present such scenes, we need a real-time walk-through system. One of the hard current problems in computer graphics is the large amount of data sent to the graphics card. Despite the development of modern graphics cards and their high performance, it is still not possible to visualize such scenes in real time with present hardware. Therefore, rendering techniques and algorithms are developed and implemented to support the graphics card. I present a new rendering technique that supports the visualization of complex scenes in real-time walk-through systems. The main focus of this work is to keep the amount of data sent to the graphics card acceptably small while at the same time obtaining high image quality. To this end, the new rendering technique uses two replacement techniques, "relief boards" and "textured depth meshes", which generate simple objects with a small number of polygons from complex objects. Both replacement techniques are evaluated with different models and compared with each other in order to analyze their properties. Finally, the rendering technique is tested and evaluated with two complex scenes.

#### July 4, 2012, 2:00 pm, F1.110

Locality of Domination in Asymmetric Wireless Ad-Hoc Networks

Locality is a major concern in wireless ad-hoc networks, where it is impractical for a wireless device to have knowledge about the entire network. Consequently, local algorithms, in which each device is only aware of the part of the network within a constant (independent of the size of the network) number of hops away from it, are critical for the deployment of such networks. Most previous work models a wireless ad-hoc network as an undirected unit disk graph, which assumes that all nodes have the same transmission range. In practice, however, nodes may have different transmission ranges due to differences in functionality, power control, and topology control. Motivated by this, this talk presents two dominating-set-based problems for asymmetric wireless ad-hoc networks, namely the Strongly Connected Dominating Absorbent Set (SCDAS) and the Directed Extended Dominating Set (DEDS). The question that arises in this context is: can we build local constant-factor approximation algorithms for these problems?

#### June 20, 2012, 2:00 pm, F1.110

AdGame - an introduction to provider positioning in distributed markets

In a market stretched over several platforms, a provider is faced with the question of where to position himself in order to maximize his income. Imagine a graph in which each node is a possible location for the provider. Agents, which represent the consumers, use the edges to travel between these locations. If an agent reaches the node on which the provider is located, the provider earns some reward. The behaviour of the agents is described by a random walk on the graph, and in each round new agents enter and old ones leave the system. Under these assumptions, the quality of a location is not fixed. The provider tries to compensate for this by changing its location, which incurs a fixed cost. Based on this model, I am interested in the following question: how good can an (online) algorithm, which does not know the behaviour of the agents in advance, possibly be in comparison to an optimal offline algorithm?
The talk gives an introduction to the model and shows the first steps towards answering the question above for simple graphs. It also deals with possible applications of game theory in this context.
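The reward side of this model can be simulated with a few lines of Python; this is a minimal sketch of a stationary provider under the random-walk assumption (function and parameter names are illustrative, and the model of agents entering and leaving is omitted):

```python
import random

def simulate_rewards(adj, provider_node, agents, rounds, reward=1.0, seed=0):
    """Estimate the income a stationary provider collects.

    adj: adjacency lists of the location graph; agents: list of start nodes.
    Each round every agent moves to a uniformly random neighbor (a simple
    random walk); the provider earns `reward` per agent landing on its node.
    """
    rng = random.Random(seed)
    income = 0.0
    positions = list(agents)
    for _ in range(rounds):
        positions = [rng.choice(adj[v]) for v in positions]
        income += reward * positions.count(provider_node)
    return income
```

Running several such simulations with the provider moving between nodes (and paying the fixed relocation cost) is one way to compare candidate online strategies empirically.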

#### June 20, 2012, 2:30 pm, F1.110

Facility Leasing

We consider an online facility location problem where clients arrive over time and their demands have to be served by opening facilities and assigning the clients to opened facilities. When opening a facility, we must choose one of $K$ different lease types to use. A lease type $k$ has a certain lease length $l_k$. Opening a facility $i$ using lease type $k$ causes a cost of $f_i^k$ and ensures that $i$ is open for the next $l_k$ time steps. In addition to the costs for opening facilities, we have to take connection costs $c_{ij}$ into account when assigning a client $j$ to facility $i$. We develop and analyze the first online algorithm for this problem that has a time-independent competitive factor.
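The cost structure just described can be written down directly; the following is only an illustrative cost checker for a given solution (the data layout is my own, not from the talk), not the online algorithm itself:

```python
def leasing_cost(leases, assignments, l, f, c):
    """Total cost of a facility-leasing solution (illustrative sketch).

    leases: set of (i, k, t): facility i leased with type k starting at
            time t, open during steps t, ..., t + l[k] - 1, at cost f[i][k].
    assignments: list of (j, t, i): client j, arriving at time t, is
            assigned to facility i, paying connection cost c[i][j].
    Raises if a client is assigned to a facility not open on arrival.
    """
    total = sum(f[i][k] for (i, k, t) in leases)
    for (j, t, i) in assignments:
        if not any(fi == i and ts <= t < ts + l[k] for (fi, k, ts) in leases):
            raise ValueError(f"facility {i} not open at time {t}")
        total += c[i][j]
    return total
```

For example, leasing facility 0 with a length-3 lease of cost 5 at time 0 and connecting one client arriving at time 2 with connection cost 1 gives a total cost of 6.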

This variant of the online facility location problem was introduced by Nagarajan in 2008 and is strongly related to both the online facility location problem by Meyerson and the parking permit problem, also by Meyerson. We present an $O(l_{\max}\log(l_{\max}))$-competitive algorithm, where $l_{\max}$ denotes the maximum lease length. Moreover, we prove that a factor of $O(\log^2(l_{\max}))$ can be achieved for many "natural" cases. Such cases include, for example, situations where the number of clients arriving in each time step does not vary too much, or is non-increasing, or is polynomially bounded in $l_{\max}$.

#### June 13, 2012, 2:00 pm, F1.110

Analysis and simulation of energy-efficient online-scheduling algorithms

The energy usage of microprocessors increases every year, because both the speed and the number of microprocessors in use keep growing. An advantage of modern microprocessors is that they can change their processor speed and thereby their energy usage. It is beneficial to compute at a slow processor speed, because the energy usage grows faster than the processor speed. However, it is important that a computation is finished before its result is needed.

This bachelor thesis presents algorithms that minimize energy usage. First, a general model is developed, on which the algorithms are based. Then the algorithms are analysed theoretically and compared on a few examples. Next, a cost model is added to the first model; it could represent the income of a data center earned by finishing a job. For this model, algorithms are shown that minimize the energy usage while taking the income per job into account in order to maximize the profit.

(Final presentation Bachelor thesis)

#### May 30, 2012, 2:00 pm, F1.110

An Easy Extendable Modeling Framework for Discrete Event Simulation Models and their Visualization

Today's simulation software covers a variety of different problem areas, e.g., production, transportation, logistics, and physics. Therefore, the manufacturers of simulation software use a generic simulation engine equipped with problem-specific libraries to offer specialized products. These products are supplemented by user-programmed models and building blocks. However, the combination of components from different problem areas often leads to software-architectural problems in terms of extensibility and reusability. Specifically for the simulation area, we offer an architectural concept that provides the ability to simply combine components from different problem areas and to make future extensions. Classic user interfaces are unable to handle this flexibility in simulation modeling. Therefore, we offer an architectural concept connecting simulation data with its visualization. We implemented our concepts in a prototype system and demonstrate them on the basis of the two problem areas "Production" and "Logistics".

#### May 23, 2012, 2:00 pm, F1.110

Egoism in Virtual Overlay Networks

Modern distributed search networks make use of acquaintance between peers to speed up searches. Those acquaintances can be modeled as a virtual network/overlay over the actual communication network. In several scenarios it is a valid assumption that those peers act selfishly and only optimize their acquaintances according to their private search interests. In this talk I will give a short introduction into the field of network creation games and how they can be used to model egoism in search networks. Further, I will present a recent research result on the cost of egoism in those networks.

Network creation games model the creation and usage costs of networks formed by a set of selfish peers. In this talk, I will introduce a generalized version of the basic network creation game (Alon et al., SPAA 2010), in which each peer may selfishly replace one of its incident links by a link to an arbitrary peer. In doing so, each peer follows its private objective of minimizing the maximal distances to the other peers it is interested in. Given peers with interests and a communication network forming a tree, I will present results regarding the structure and quality of equilibria: a worst-case private cost upper bound of $O(\sqrt{n})$ as well as a tight bound of $\Theta(\sqrt{n})$ for the Price of Anarchy, for every equilibrium with $n$ peers.
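The private cost each peer minimizes (its maximal distance to the peers it is interested in) can be computed with a plain BFS; this sketch is my own illustration of the cost function, not part of the presented results:

```python
from collections import deque

def private_cost(adj, v, interests):
    """Maximal hop distance from peer v to the peers it is interested in.

    adj: adjacency lists of the (undirected) overlay network;
    interests: set of peers v wants to reach. Returns infinity if some
    interesting peer is unreachable, 0 if `interests` is empty.
    """
    dist = {v: 0}
    queue = deque([v])
    while queue:                          # standard breadth-first search
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return max((dist.get(t, float("inf")) for t in interests), default=0)
```

A swap is improving for a peer exactly when replacing one incident edge lowers this value; recomputing it before and after a candidate swap is the basic step of any best-response dynamics in this game.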

#### May 16, 2012, 2:00 pm, F1.110

Energy-aware Scheduling - Take a Break and Hold Your Competitive Factor Tight

I present two recent research results. Both consider an energy-aware scheduling scenario where tasks of a certain size and value arrive over time. If a task is not finished until its deadline, we have to pay a penalty equal to its value. Our scheduler decides which job to process at any time and how much energy to invest. The more energy invested, the faster it is finished. The cost of a schedule is the sum of penalty and invested energy.

The first result combines two known energy-saving techniques (speed scaling and sleep states). We show that this combination introduces a new difficulty: in particular, standard scheduling techniques no longer yield a constant competitiveness, but can become arbitrarily bad. We identify the reason for this deterioration, showing that a job's value density (the ratio between its value and its work) is a crucial parameter. While we are not yet able to provide more elaborate algorithms that do not suffer from this drawback, we analyze a combination of two standard algorithms and prove a nearly optimal competitive factor depending on the maximum value density.

The second result drops the sleep state from the model. For this case, Chan, Lam, and Li (2010) presented a scheduling strategy that is $\alpha^{\alpha}+2e\alpha$-competitive, where $\alpha$ is a constant determining the power consumption of the processor. Their analysis is based on a standard technique involving a sophisticated potential function. I present a new analysis of (essentially) the same algorithm yielding a tight competitiveness of $\alpha^{\alpha}$. The analysis is based on duality theory for convex programs and not only yields a much simpler proof, but also this improved, best possible competitive factor. Moreover, in contrast to the potential function based analysis, my technique may allow a relatively easy generalization to the multi-processor case.
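The power function mentioned here is the standard one, $P(s) = s^{\alpha}$ for some constant $\alpha > 1$; a two-line computation makes the energy/speed trade-off concrete (a sketch under that standard assumption, with illustrative names):

```python
def energy(work, speed, alpha=3.0):
    """Energy to process `work` units at constant speed `speed`, assuming
    the standard power function P(s) = s**alpha: the job runs for
    work/speed time units and consumes P(speed) * time energy."""
    return speed ** alpha * (work / speed)   # = work * speed**(alpha - 1)
```

Since the energy equals `work * speed**(alpha - 1)`, halving the speed (when the deadline permits) cuts the energy by a factor of $2^{\alpha-1}$; this convexity is what makes running as slowly as deadlines allow worthwhile, and $\alpha$ is exactly the constant appearing in the $\alpha^{\alpha}$ competitive factor above.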

#### May 9, 2012, 2:00 pm, F1.110

Applied 3D computer graphics - project status report

In order to visualize and interact with complex, three-dimensional objects in a virtual environment, excessive manual preparations of the data may be necessary. While the visualization of complex data for itself can be a challenging task already, the creation of animations, or the simulation of behavior of the virtual objects usually requires expert knowledge. Supporting the user with new tools and interaction techniques helps to reduce this obstacle when using 3D computer graphics in an industrial context.

In this talk, I am going to present the status of current project plans in which we are planning to work in the described context.

#### April 25, 2012, 2:00 pm, F1.110

Offering Dynamics to Emerging Structures

During the past year, the project group NODES focused on local strategies in dynamic networks. We chose four different problems in which the participants of a network have to act in a local way and need to cope with dynamics caused by adversaries, so-called external dynamics. Modifications initiated by the participants themselves are referred to as controlled dynamics. We analyzed a new variant of the facility location problem in which facilities are allowed to move and clients dynamically change their demand. We compared an online algorithm to the optimal offline algorithm and were able to prove a competitive ratio of $\sqrt{2}+1$. Another problem dealt with rearranging nodes on a line and in a tree such that frequently communicating nodes are close together. For the line, we introduced several classes of algorithms and analyzed their power. For the tree, we provide some experimental results. Another study aimed at a self-stabilizing topology that provides short distances between nodes and a constant degree. Furthermore, we evaluated different ways to introduce external and controlled dynamics into the weak coloring problem.

#### April 11, 2012, 2:00 pm, F1.110

Capabilities and Limitations of Distributed Computational Models for Highly-Dynamic Networks

A dynamic network is modeled as a dynamic graph G_r=(V,E_r), where V is a static set of nodes and E_r is the set of links in round r. Nodes have IDs and, in the worst case, the topology may change arbitrarily from round to round as long as the network stays connected in every round. A node may broadcast a message in round r which is delivered to its neighbors in round r+1. This talk gives a short introduction to the work by Kuhn et al. on fundamental problems in this model such as counting, k-verification, k-token dissemination, and all-to-all token dissemination.
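The broadcast model above can be simulated directly; the following sketch of all-to-all token dissemination is my own illustration (it folds the "delivered in round r+1" rule into one loop iteration per round, and all names are illustrative):

```python
def disseminate(edge_sets, tokens):
    """All-to-all token dissemination in a dynamic graph (sketch).

    edge_sets: list of undirected edge sets E_r, one per round, chosen by
    the adversary (assumed connected each round); tokens: dict mapping
    each node to its initial token. Each round, every node broadcasts all
    tokens it knows over the current links. Returns the number of rounds
    until every node knows every token, or None if the rounds run out.
    """
    known = {v: {tok} for v, tok in tokens.items()}
    everything = set(tokens.values())
    for r, edges in enumerate(edge_sets, start=1):
        delivered = {v: set() for v in known}
        for (u, v) in edges:              # links present in round r
            delivered[u] |= known[v]
            delivered[v] |= known[u]
        for v in known:
            known[v] |= delivered[v]
        if all(k == everything for k in known.values()):
            return r
    return None
```

On a static 3-node path, for instance, dissemination completes in 2 rounds; the interesting lower bounds arise when the adversary rewires the edges between rounds.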

Starting from this, the goal of my Ph.D. project is to understand, model, and classify different, more specific types of dynamics. A classification could be based on the complexity of performing benchmark tasks like all-to-all token dissemination. This talk gives an overview of the problems and questions I want to analyze in my research project.