Current:
Wintersemester 2013/14
Sascha Brandt
March 26, 2014, 2:00 pm, F1.110
An Out-of-Core Algorithm for Rendering Large Dynamic Scenes
Rendering large, complex three-dimensional scenes is a well-known problem in computer graphics with applications in many fields, including architecture, CAD, and games. The challenge is to render 3D data that does not fit into the main memory of modern computers. In addition, it is often desirable to alter the layout of a scene dynamically, e.g. for the visual planning of a factory together with clients. The purpose of this master's thesis is to develop a so-called out-of-core algorithm that renders such highly complex, large, and dynamic 3D scenes in real time.
(Introductory presentation master's thesis, talk in German)
Tamás Lukovszki
Eötvös Loránd University
March 26, 2014, 2:30 pm, F1.110
Balanced Neighbor Selection for BitTorrent-Like Networks
We introduce a new neighbor selection strategy for tracker-based peer-to-peer systems such as BitTorrent. Our method is based on a balanced multiple-choice algorithm that takes into account not only the current load of a peer but also the probability that it will be selected in the future. We analyze the properties of the constructed overlay topology theoretically. We prove that the maximum degree in the constructed graph is O(1) while the diameter is O(ln n), with high probability, where n is the number of nodes. Considering a randomized upload policy, we prove that the distribution of b blocks on the overlay generated by our neighbor selection strategy takes O(b + ln n) rounds, with high probability, which is optimal up to a constant factor. This improves the previous upper bound of O(b + (ln n)^2) by Arthur and Panigrahy (SODA'06). To adapt our algorithm to real BitTorrent networks, only a slight modification of the tracker is necessary, without any change to the clients. Besides the theoretical analysis, thorough simulations have been carried out to validate our algorithm and demonstrate its applicability in the BitTorrent network.
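To make the tracker-side selection rule concrete, the following minimal Python sketch shows a balanced two-choice selection; the load values and their update are illustrative assumptions, not the exact strategy analyzed in the talk.

import random

def select_neighbor(peers, load):
    """Balanced two-choice rule (sketch): sample two candidate peers uniformly
    at random and return the one with the smaller load.  In the strategy of
    the talk, the load combines a peer's current degree with the probability
    that it will be selected again in the future; here it is simply a
    caller-supplied value standing in for that quantity."""
    a, b = random.sample(peers, 2)
    return a if load[a] <= load[b] else b

# Illustrative use by a tracker handling a join request:
peers = ["p1", "p2", "p3", "p4"]
load = {"p1": 2.0, "p2": 1.0, "p3": 3.5, "p4": 1.5}
neighbor = select_neighbor(peers, load)
load[neighbor] += 1  # the chosen peer's degree grows by one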
Stanislaw Eppinger
March 12, 2014, 2:00 pm, F0.530
Developing a modular Raytracer
Developing applications is a time-consuming process. Developers often focus on the reusability of their code to shorten that process for future projects. The vast field of software engineering offers many approaches and methods to preserve reusability. Nevertheless, it does not offer a way to automatically generate new applications from existing code according to the demands of a user. In this work, a theoretical system is presented that is capable of generating applications out of runnable, compiled modules while satisfying the user's requirements regarding the application. To this end, a new approach is introduced to represent modular, decomposed applications in a formalized way. The goal of such modular software is to easily exchange functions and parts of the software, add new functionality, or even create new, different software out of it. As an example of such a decomposed application, a modular raytracer has been developed which is composed solely of compiled modules that can be exchanged easily.
(final presentation Bachelor's thesis, talk in German)
Peter Kling
February 19, 2014, 2:00 pm, F1.110
Trading Energy for Responsiveness
This talk presents results recently published at STACS 2014. We consider a speed scaling model in which we are to schedule jobs of different sizes and importance on a single processor. Our objective is to minimize a linear combination of the weighted (fractional) response time and the energy. There exist many approximative results for this problem (both offline and online), but we investigate the complexity of finding an optimal offline schedule, one of the most fundamental questions in the speed scaling area. To approach this question, we consider a variant that is relaxed in two respects: the objective is the fractional (instead of integral) response time, and the processor has a discrete set of allowed speeds (in contrast to a processor that is allowed to run at arbitrary speeds). In the talk, I show how we can use a primal-dual formulation of the problem to extract structural properties of optimal solutions. These properties can be interpreted in a geometric manner, which helps us derive an efficient algorithm to compute optimal schedules for this relaxed problem variant.
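As a sketch of the objective, with notation assumed here since the abstract does not fix it: job j has weight w_j, release time r_j, and completion time C_j; s(t) denotes the processor speed and P(s) the power consumption, often P(s) = s^\alpha in the speed scaling literature. The relaxed variant of the talk replaces the first term by the weighted fractional response time and restricts s(t) to a finite set of allowed speeds.

\[
  \min \; \sum_j w_j \, (C_j - r_j) \;+\; \int_0^\infty P(s(t)) \, dt
\]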
Christine Markarian
February 12, 2014, 2:00 pm, F1.110
Online Set Cover Problem
We study a variant of online set cover where elements of U, |U| = n, arrive over time and must be covered by sets from a family F of subsets of U, |F| = m. Each set can be leased for K different periods of time. Leasing a set S for a period k incurs a cost of c_k^S and allows S to cover its elements for the next l_k time steps. The objective is to minimize the total cost of the sets leased, such that at any time t at which an element arrives, it is covered by a set which contains it and is leased at time t. This problem was introduced by Anthony and Gupta, who gave an O(log n)-approximation for the problem in the offline setting. This is the best possible unless P = NP. In the online setting, for the special case K = 1 with a single infinite lease period, Alon et al. gave a deterministic O(log n log m)-competitive algorithm for the problem. In this talk, I present a deterministic O(K log d) = O(K log n)-competitive primal-dual algorithm for the general case, where d is the maximum cardinality of the sets. For the special case K = 1, this is optimal unless P = NP, and it improves the O(log n log m)-competitive factor of Alon et al.
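A primal-dual algorithm of this kind typically works against an LP relaxation; the following is a sketch of the natural relaxation under the notation of the abstract (the variable indexing is an assumption, and the program actually used in the talk may differ). The variable x_{S,k,t'} indicates that set S is leased with lease type k starting at time t', at cost c_k^S, covering its elements during [t', t' + l_k):

\[
\begin{aligned}
  \min \quad & \sum_{S,\,k,\,t'} c_k^S \, x_{S,k,t'} \\
  \text{s.t.} \quad & \sum_{S \ni e} \; \sum_{k} \; \sum_{t' \,:\, t \in [t',\, t' + l_k)} x_{S,k,t'} \;\ge\; 1
    \qquad \text{for every element } e \text{ arriving at time } t, \\
  & x_{S,k,t'} \;\ge\; 0 .
\end{aligned}
\]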
David Pahl
February 12, 2014, 3:30 pm, F1.110
Reputation mechanisms for service compositions
A modified model of a buyer-seller auction with one buyer and n sellers is analyzed empirically with respect to existing Nash equilibria, using a self-implemented simulator. The selection of a product in every round of the corresponding strategic game is made by the buyer, who uses the reputation mechanism to minimize his moral hazard. Existing Nash equilibria are tested for their behavior under different input parameters. The result of this analysis shows that the character of the Nash equilibria follows a certain scheme.
(final presentation Bachelor's thesis)
Martin Ziegler
Technische Universität Darmstadt
January 22, 2014, 2:00 pm, F1.110
Introduction to Real Complexity Theory: Numerical Approaches to P-vs-NP and beyond
Maximizing discrete (e.g. Boolean or linear integer) functions yields standard NP-complete problems, whereas smooth functions on the real line admit efficient algorithms such as Newton's method and are thus considered easy to maximize. However, Harvey Friedman and Ker-I Ko demonstrated (1982) that this problem, too, is 'complete' for the P-vs-NP question. We present their 'reduction' and give an introduction to the framework of real complexity theory that has recently evolved around it.
Edita Rollova
Department of Mathematics, Faculty of Applied Sciences, University of West Bohemia
January 15, 2014, 2:00 pm, F1.110
Circuit covers of signed graphs
A circuit cover of a graph G is a collection of circuits of G such that every edge of G is contained in at least one of these circuits. In the talk we define circuit covers of signed graphs, i.e., graphs in which each edge is either positive or negative, and state an upper bound on the length of a shortest circuit cover of a signed graph.
Andreas Cord-Landwehr
January 15, 2014, 3:00 pm, F1.110
Edge Pricing in Network Creation Games
Network creation games model the creation and usage costs of networks formed by selfish peers. Each peer can provide a set of edges to the network in order to minimize its private cost. Typically, these costs are modeled by the maximum or average distance in the network plus the costs incurred by maintaining edges. We consider generalized versions of the widely adopted SUM-game (Fabrikant et al., PODC 2003) and MAX-game (Demaine et al., PODC 2007), incorporating quality-of-service aspects. In our games, peers can create edges of arbitrary weights, instead of all edges having a weight of one. The maintenance cost of an edge is then given by a monotonically decreasing cost function, i.e., lighter edges are more expensive than heavier ones. We provide upper bounds for the price of anarchy in both games. For the SUM-game, the bound can be shown to be nearly tight for superlinearly decreasing price functions. Also in the SUM-game, the price of stability is either constant or (for corner cases) depends on the range of available edge weights.
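A peer's private cost in such a generalized game can be sketched as follows (the notation is assumed for illustration; the exact definition is given in the talk). Peer v buys a set E_v of weighted edges, c(w) is the monotonically decreasing maintenance cost of an edge of weight w, and d_G(v,u) denotes the distance from v to u in the resulting network G:

\[
  \mathrm{cost}(v) \;=\; \sum_{e \in E_v} c(w_e) \;+\;
  \begin{cases}
    \sum_{u} d_G(v,u) & \text{(SUM-game)} \\
    \max_{u} d_G(v,u) & \text{(MAX-game).}
  \end{cases}
\]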
Julian Meschede
December 18, 2013, 11:00 am, F1.110
Bandwidth-limited scheduling with scalable job requirements in multiprocessor environments
While there already exists a broad field of research on the resource-constrained scheduling problem, this presentation introduces a new approach to this problem with regard to a new model. In this model, the bandwidth for memory access is considered the bottleneck of a system consisting of two processors. Moreover, a discrete time model is considered in which the allocation of bandwidth can be adjusted at discrete points in time. This means the amount of bandwidth the two processors have at their disposal can be allocated arbitrarily, as long as the sum of both amounts does not exceed the overall available bandwidth. In contrast to previous models (to the best of my knowledge), the amount of resource needed to start a job does not have to be equal to the full requirement of this job. Under the restriction that each job's bandwidth requirement is smaller than the maximum amount of bandwidth available at any time, I will introduce a polynomial-time algorithm for this model. Furthermore, I will show an attempt to adapt this algorithm to the scheduling problem without the bandwidth requirement restriction.
(final presentation Bachelor's thesis, talk in German)
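The central constraint of the model can be sketched as follows (notation assumed for illustration): if B denotes the overall available bandwidth and b_1(t), b_2(t) the amounts allocated to the two processors in time step t, then

\[
  b_1(t) + b_2(t) \;\le\; B \qquad \text{for all } t \in \{0, 1, 2, \dots\},
\]

where a job may be processed with only part of its bandwidth requirement in a step, and the restriction used for the polynomial-time algorithm is that every job's requirement is smaller than the bandwidth available at any time.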
Pavel Podlipyan
December 11, 2013, 2:00 pm, F1.110
Blind robots or new connectivity models for robot coordination problems
It is relatively expensive to equip every autonomous mobile robot with a computer vision system, since computational power and energy are limited and the number of robots is large. Inspired by the Kilobot project [1], we present a new connectivity model for a group of autonomous robots. Computer vision gives the robots knowledge of the positions and distances of their neighbors. What happens if we replace the vision system with a few photo detectors? Will the blinded robots still be able to solve basic coordination problems? These questions will be discussed and a corresponding theoretical model will be presented.
[1] M. Rubenstein, N. Hoff, R. Nagpal, Kilobot: A Low Cost Scalable Robot System for Collective Behaviors, IEEE Spectrum: robotics, June 16, 2011
Johannes Schaefer
December 10, 2013, 4:30 pm, F1.110
Movement Behavior Analysis of Robot Swarms with Leaders
Just as a swarm of ants is capable of carrying out difficult tasks although a single ant has only limited capabilities, swarms of simple robots shall be used for tasks that usually require high coordination effort. We use swarms of robots that cannot communicate and are oblivious, together with a few better-equipped leader robots. These swarms can perform tasks like gathering around the leaders or moving along a trajectory without the swarm breaking apart. We give an overview of our theoretical results, including speed calculations for certain formations, as well as empirical results about the time needed for swarms to accomplish selected tasks.
(final presentation Bachelor's thesis)
Olaf Bonorden
McAfee Development Center
November 29, 2013, 4:15 pm, F1.110
Working at McAfee
The talk will start with an introduction of McAfee and the office in Paderborn. The next part is about the main product, the McAfee Web Gateway, its features and architecture, showing that there are many more challenges than just removing advertisements from web pages. After a short preview of the next generation of web security gateways, the last part covers a different topic: privacy. The proposal shows how anonymous but "controlled" internet access is possible by using a joint hash function: two parties calculate h(S_a, S_b) without leaking any knowledge about S_a or S_b, respectively.
Sören Riechers
November 27, 2013, 2:00 pm, F1.110
Shared bandwidth scheduling
In modern compute centers there are thousands of processors. These processors (or subsets of them) often share a common resource such as memory or a data connection. This common resource can become the bottleneck of the whole system, especially if data throughput is high. In my talk, I introduce a new model considering a multiprocessor environment with identical, fixed-speed processors sharing a common resource or bandwidth. The bandwidth is the bottleneck of the system, i.e., if a job has to be slowed down, this is because the required bandwidth is not provided in some time step (we consider the problem as a discrete model). Processing only parts of a job in one time step is allowed; this is the new aspect in comparison to previous models. I present basic strategies and ideas to solve the problem and future modifications that could be interesting. Additionally, I give a deeper insight into the problem by sketching some basic proof ideas, aiming at a question we are not yet able to answer: is a specific part of the problem polynomially solvable, or can we prove it to be NP-hard?
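As a minimal illustration of the model (not of the strategies from the talk), the following Python sketch processes one discrete time step; a job that is granted only part of its required bandwidth makes correspondingly less progress. The data layout, the greedy first-come order, and the omission of the processor limit are assumptions made only for brevity.

def simulate_step(jobs, bandwidth):
    """One discrete time step in the shared-bandwidth model (sketch).
    Each unfinished job asks for its bandwidth requirement 'req'; if the total
    demand exceeds the shared bandwidth, jobs are served greedily in the given
    order and may receive only part of their requirement."""
    remaining = bandwidth
    for job in jobs:
        if job['work'] <= 0:
            continue                       # job already finished
        share = min(job['req'], remaining)
        job['work'] -= share / job['req']  # partial processing slows the job down
        remaining -= share
    return jobs

# Example: three jobs competing for a shared bandwidth of 10 per time step.
jobs = [{'req': 6, 'work': 2.0}, {'req': 6, 'work': 1.0}, {'req': 4, 'work': 3.0}]
jobs = simulate_step(jobs, bandwidth=10)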
Alexander Mäcker
November 20, 2013, 3:00 pm, F1.110
Greedy Network Creation With Heavy And Light Edges
Network creation games are widely studied to model the emergence of networks of selfishly acting nodes. In these games it is often assumed that nodes can create undirected, unweighted edges for the price of $\alpha>0$ in order to minimize their cost for own edges plus the average (SUM-version) or maximum (MAX-version) distance to the other nodes in the network.
In this thesis, we introduce and analyze a new model based on weighted graphs in which there are different edge types providing different qualities depending on their prices. In particular, nodes can choose between an edge for the price of $\alpha$ with weight 1 and one for the price of 1 with weight $\beta$, $\beta \geq \alpha$. We study the Nash equilibria of both versions and show bounds on the Price of Anarchy in order to measure the loss of efficiency due to the selfish behavior. Since the common model of perfectly rational players might be unrealistic, as playing optimally is computationally hard, we also study the impact of restricting players to greedy moves by proving bounds on the Price of Anarchy and the approximation quality of greedy equilibria (Nash equilibria of these specific, polynomially bounded greedy players).
(final presentation master's thesis)
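As an illustration of the heavy/light cost structure (with assumed notation), a node v that buys h_v edges of weight 1 at price $\alpha$ each and $\ell_v$ edges of weight $\beta$ at price 1 each incurs the cost

\[
  \mathrm{cost}(v) \;=\; \alpha\, h_v \;+\; \ell_v \;+\;
  \begin{cases}
    \sum_{u} d_G(v,u) & \text{(SUM-version)} \\
    \max_{u} d_G(v,u) & \text{(MAX-version),}
  \end{cases}
\]

where distances in G are measured with respect to the edge weights.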
Daniel Jung
November 13, 2013, 2:00 pm, F1.110
Gathering a closed chain of stupid, point-shaped agents
Assume we have a crowd of indistinguishable, point-shaped mobile agents distributed in the plane. Each of these agents is connected to exactly two neighboring agents in such a way that they form a closed chain (self-intersections are allowed).
The agents do not have any global knowledge; in particular, they have only a constant viewing range. At the moment, for easier analysis, we allow some limited features like counting rounds and limited communication, both of which might possibly be removed in the future.
In my model, I restrict the possible positions of the agents to Z^2. I am currently developing an algorithm that can gather this given chain within a linear number of time steps.
Manuel Malatyali
November 13, 2013, 2:30 pm, F1.110
Big Data in distributed Environments
We consider the model of Cormode for dealing with big data problems in a distributed fashion. In this model, there are n nodes, each receiving a data stream as input, and one coordinator, which has to compute a function over the union of all data streams. The nodes can communicate directly with the coordinator and vice versa, but not with other nodes. The goal is to minimize the total amount of communication needed to compute the function.
Cormode et al. have shown tight bounds for the countdown problem, i.e., raising an alarm if the number of events in the union of the data streams has reached a given threshold. In this talk we look at an environment in which events are allowed to disappear and the coordinator still has to raise an alarm if the threshold is exceeded.
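The following Python sketch simulates a simple round-based protocol for the countdown problem, only to make the model tangible; it is neither the tight algorithm of Cormode et al. nor does it handle the disappearing events considered in the talk, and the quota and synchronization rules are assumptions.

def countdown_protocol(streams, tau):
    """Sketch of a round-based protocol for the countdown problem.
    streams[t][i] is the number of events node i observes in time step t; the
    function returns the step in which the coordinator raises the alarm and
    the number of messages used (None if the threshold is never reached)."""
    n = len(streams[0])
    total = 0                 # exact count known to the coordinator
    local = [0] * n           # events not yet reported exactly
    signals = 0               # coarse progress signals in the current round
    messages = 0
    quota = max(1, (tau - total) // (2 * n))
    for t, step in enumerate(streams):
        for i, events in enumerate(step):
            local[i] += events
            while local[i] >= quota:      # a node signals every 'quota' events
                local[i] -= quota
                signals += 1
                messages += 1
        if signals >= n or total + signals * quota >= tau:
            messages += 2 * n             # coordinator polls all nodes
            total += signals * quota + sum(local)
            local, signals = [0] * n, 0
            if total >= tau:
                return t, messages        # alarm: threshold reached
            quota = max(1, (tau - total) // (2 * n))
    return None, messages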
Markus Benter
November 6, 2013, 2:00 pm, F1.110
Optimizing Multi-Keyword Search in Peer-To-Peer Networks by Exploiting Query Behavior
One of the most important kinds of queries in peer-to-peer networks are multi-keyword queries. In a multi-keyword query, m strings are specified, and the task is to find all documents in which each of the m strings appears at least once. In practice, it can be observed that some keywords are more popular than others. This query popularity can be modeled by a cumulative distribution function. The talk is about how to exploit such cumulative distribution functions to optimize peer-to-peer networks for multi-keyword search.
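To make the query task itself concrete, here is a minimal Python sketch of evaluating a multi-keyword query over an inverted index (posting lists are intersected shortest-first). It only illustrates the problem; the popularity-aware, overlay-level optimizations discussed in the talk are not shown, and the index layout is an assumption.

def multi_keyword_query(index, keywords):
    """Return all documents containing every query keyword; the posting lists
    are intersected starting with the shortest one so the candidate set
    shrinks as quickly as possible."""
    lists = sorted((index.get(k, set()) for k in keywords), key=len)
    if not lists:
        return set()
    result = set(lists[0])
    for postings in lists[1:]:
        result &= postings
        if not result:
            break             # no document contains all m keywords
    return result

# Example with m = 2 keywords:
index = {"peer": {1, 2, 3}, "network": {2, 3, 5}, "cat": {7}}
print(multi_keyword_query(index, ["peer", "network"]))   # {2, 3}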
Max Drees
October 30, 2013, 2:00 pm, F1.110
Budget Coverage Problem
In the Budget Coverage Problem (BCP), we are dealing with a set of clients and a set of facilities. Every client has a finite budget, and opening a facility yields a part of this budget from every client, based on their distance. Naturally, no client can give more than its budget, and we are only allowed to open at most k facilities. This defines an optimization problem, which is NP-hard. In this talk, the BCP is introduced and a first connection to related problems is given. The second half of the presentation deals with a game based on the BCP in which various players compete against each other in gaining as much of the available budget as possible. We take a look at different possibilities of how to split the produced payoff between the players and how they affect the existence of a Nash equilibrium.
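One natural way to write the underlying optimization problem, with notation assumed for illustration: C is the set of clients, F the set of facilities, b_c the budget of client c, and p_c(f) the part of c's budget obtained by opening facility f, depending on their distance. The goal is to choose at most k facilities maximizing the collected budget, where no client pays more than its budget:

\[
  \max_{F' \subseteq F,\; |F'| \le k} \;\; \sum_{c \in C} \min\Bigl( b_c,\; \sum_{f \in F'} p_c(f) \Bigr).
\]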
Martin Dietzfelbinger
Technische Universität Ilmenau
October 16, 2013, 2:00 pm, F1.110
Tight Lower Bounds for Greedy Routing in Higher-Dimensional Small-World Grids
We consider Kleinberg's celebrated small world graph model, in which a D-dimensional grid $\{0,\dots,n-1\}^D$ is augmented with a constant number of additional unidirectional edges leaving each node. These long range edges are determined independently at random according to a probability distribution (the augmenting distribution), which is the same for each node. Kleinberg suggested using the inverse D-th power distribution, in which node v is the long range contact of node u with a probability proportional to $\|u-v\|_2^{-D}$. He showed that this makes it possible to route messages efficiently by the greedy strategy: always move the message over a link that brings it closest to the target w.r.t. the Manhattan distance. The expected length of the path found is O((log n)^2).
We prove that for fixed D>=2, greedy routing does not perform asymptotically better for any uniform and isotropic augmenting distribution, i.e., any distribution for which the probability that node u has a particular long-range contact v depends only on $\|u-v\|_2$. (Previously, this was known only for D=1.)
For the proof we define a so-called budget game, in which a token travels over a one-dimensional game board from one end to the other, while the player manages a "probability budget". In each round, the player "bets" part of her remaining probability budget on step sizes. The token makes progress by a step size chosen at random according to the player's bet, and depending on this progress, some of the player's bet is removed from her probability budget. We prove a tight lower bound on the expected number of rounds needed in this game. The lower bound for greedy routing in the D-dimensional grid follows by a reduction.
(joint work with Philipp Woelfel, University of Calgary)
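For illustration, a minimal Python sketch of the greedy strategy described above on the D-dimensional grid; drawing the long-range contacts uniformly at random here is only for brevity, whereas the model uses the inverse D-th power (or another uniform and isotropic) augmenting distribution.

import random

def greedy_route(source, target, long_range, n, D):
    """Greedy routing sketch: in every step the message is forwarded over the
    link (grid edge or long-range edge) that brings it closest to the target
    with respect to the Manhattan distance.  long_range[u] is the tuple of
    unidirectional long-range contacts of node u."""
    def manhattan(u, v):
        return sum(abs(a - b) for a, b in zip(u, v))

    def grid_neighbors(u):
        for i in range(D):
            for d in (-1, 1):
                if 0 <= u[i] + d < n:
                    yield u[:i] + (u[i] + d,) + u[i + 1:]

    route = [source]
    u = source
    while u != target:
        candidates = list(grid_neighbors(u)) + list(long_range.get(u, ()))
        u = min(candidates, key=lambda v: manhattan(v, target))
        route.append(u)
    return route

# Example: 2-dimensional 10x10 grid with one long-range contact per node.
n, D = 10, 2
nodes = [(x, y) for x in range(n) for y in range(n)]
long_range = {u: (random.choice(nodes),) for u in nodes}
print(greedy_route((0, 0), (9, 9), long_range, n, D))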