Summer Semester 2011

Manuel Hüster, University of Paderborn

September 28, 2011, 2:00 p.m., F1.110

An Efficient Path-Finding Algorithm for Multi-Storey Three-Dimensional Buildings

Path-finding algorithms play an elementary role in many different application areas today. For computing the shortest connection between two nodes, an optimistic estimate is used. There are various estimation functions, such as the Euclidean distance, that meet the efficiency requirements within two-dimensional environments. For multi-storey, three-dimensional buildings, however, no analogously functioning estimation function existed.

Consequently, in this master's thesis an approach was developed that computes an optimistic estimate from an abstracted representation. Several realizations were used or developed for this purpose and compared both directly and against the Euclidean distance.

The estimates are used by an A* algorithm; using the estimates derived from the abstraction led to a more efficient execution for multi-storey, three-dimensional buildings than using the Euclidean distance.
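
The role of the optimistic estimate can be illustrated with a minimal A* sketch in Python that uses the Euclidean distance as its admissible heuristic; the graph layout and function names here are our own illustration, not the thesis's abstraction-based estimator:

```python
import heapq
import math

def euclidean(a, b):
    # Admissible (optimistic) estimate: the straight-line distance
    # never overestimates the true shortest-path cost.
    return math.dist(a, b)

def a_star(graph, coords, start, goal):
    """graph: {node: [(neighbor, edge_cost), ...]}, coords: {node: (x, y, z)}."""
    open_set = [(euclidean(coords[start], coords[goal]), 0.0, start)]
    best = {start: 0.0}
    while open_set:
        _, g, node = heapq.heappop(open_set)
        if node == goal:
            return g
        if g > best.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, cost in graph.get(node, []):
            g_new = g + cost
            if g_new < best.get(neighbor, float("inf")):
                best[neighbor] = g_new
                f = g_new + euclidean(coords[neighbor], coords[goal])
                heapq.heappush(open_set, (f, g_new, neighbor))
    return float("inf")
```

Any admissible estimator, such as one derived from an abstracted building representation, could be substituted for `euclidean` without affecting correctness.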

(Final Presentation Master Thesis)

Philipp Brandes, University of Paderborn

September 14, 2011, 9:00 a.m., F1.110

Robust Distributed Computation in Dynamic Networks

Consider a distributed computation in a dynamic network with n nodes. We allow the topology of this network to be changed by an adversary in each round, but a connected graph must always be maintained. The nodes communicate synchronously via broadcasting to their neighbors in every round. An algorithm for this setting has been presented which is able to correctly count the number of nodes in O(n²) rounds.

In this master's thesis we extend the research in that area. We analyze the robustness of the algorithm. The edges in the original model are considered to be reliable. But what happens if errors can occur? To analyze this, a common failure model is added on top of the proposed model. We assume randomized edge failures and examine their consequences for the algorithm, and how to adapt it such that it outputs the correct count with high probability. Furthermore, we analyze the robustness of the algorithm if a dynamic set of nodes is introduced and propose changes such that the counting process continues to output a valid value. In addition to the robustness analysis, we briefly explore different network properties which might be used to reduce the run-time of the counting algorithm.

(Final Presentation Master Thesis)

Sven Kurras, University of Paderborn

August 31, 2011, 2:00 p.m., F1.110

Distributed Sampling of Regular Graphs

Generating random graphs, called graph sampling, is used as a tool in various applications and is also of theoretical interest in its own right. In the context of Peer-to-Peer networks, one is interested in sampling d-regular graphs that provide high connectivity and low diameter while being robust against node failures and adversarial attacks. Distributed strategies are preferred, because any central server might become a bottleneck and makes such a network vulnerable. Since random graphs provide the above properties with high probability, a promising distributed strategy is to constantly randomize the connections between the nodes by a simple local operation, aiming to achieve a global structure that is sufficiently close to random. One therefore needs to quantify the speed of convergence towards the long-term random distribution, i.e., the mixing time of the underlying Markov chain towards its stationary distribution on all d-regular graphs.

An example of such an operation is k-Flip, where the outer edges of a random (k+2)-walk are interleaved. For k=1, upper bounds of high polynomial degree on the mixing time are known, while it is conjectured that O(n log n) should hold for n-node graphs. For k>1, the only known bound requires k to be very large, k ∈ Θ(d² n² log 1/ε), which conflicts with the locality requirement.
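
For k=1, the operation can be sketched as follows; this is a minimal Python illustration under our own assumptions (adjacency-set representation, nodes of degree at least two, rejection of moves that would create self-loops or parallel edges), not the exact formulation from the literature:

```python
import random

def one_flip(adj, rng=random):
    """Attempt a single 1-Flip on a graph given as {node: set(neighbors)}.

    Samples a random 3-path u-v-w-x and replaces the outer edges
    {u, v} and {w, x} by {u, w} and {v, x}. Node degrees are
    preserved, and connectivity is preserved because the middle
    edge {v, w} stays in place. Assumes every node has degree >= 2.
    """
    v = rng.choice(sorted(adj))
    w = rng.choice(sorted(adj[v]))
    u = rng.choice(sorted(adj[v] - {w}))
    x = rng.choice(sorted(adj[w] - {v}))
    # Reject moves that would create self-loops or parallel edges.
    if u == x or w in adj[u] or x in adj[v]:
        return False
    adj[u].discard(v); adj[v].discard(u)
    adj[w].discard(x); adj[x].discard(w)
    adj[u].add(w); adj[w].add(u)
    adj[v].add(x); adj[x].add(v)
    return True
```

Repeating this operation many times is exactly the Markov chain whose mixing time on all d-regular graphs is in question.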

In a first part, this master's thesis investigates the state of the art in distributed sampling of regular graphs. The main focus lies on gaining deep insights into the problem structure as well as on working out the advanced proof techniques involved, such as canonical paths, Markov chain coupling, and graph spectra. Thus prepared, in a second part, an attempt will be made to obtain a valuable result for one of the involved problems, for example a mixing time upper bound for k ∈ Θ(log n).

(Introductory Presentation Master Thesis)

Holger Schmeisky, University of Paderborn

August 10, 2011, 2:00 p.m., F1.110

Approximate String Matching (Hamming distance)

Approximate string matching under Hamming distance is a classic problem of computer science. The Hamming distance counts the number of mismatching characters between two strings. Applications in bioinformatics need algorithms that are fast for small alphabets. In recent years, several average-optimal algorithms have been presented; this thesis compares their performance on real inputs.
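
As a baseline for comparison, the naive algorithm is easy to state (a sketch for illustration; the average-optimal algorithms examined in the thesis avoid inspecting most text characters):

```python
def approx_matches(text, pattern, k):
    """Return all positions i where pattern matches text[i:i+m]
    with at most k mismatching characters (Hamming distance <= k).

    Naive O(nm) baseline: every window is scanned character by
    character, aborting early once more than k mismatches are seen.
    """
    m = len(pattern)
    hits = []
    for i in range(len(text) - m + 1):
        mismatches = 0
        for a, b in zip(text[i:i + m], pattern):
            if a != b:
                mismatches += 1
                if mismatches > k:
                    break
        if mismatches <= k:
            hits.append(i)
    return hits
```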

(Final Presentation Bachelor Thesis)

Kathrin Bujna, University of Paderborn

August 3, 2011, 2:00 p.m., F1.110

Learning and Analysing Local Strategies for Self-Organizing Robotic Exploration Teams

Between two fixed base stations, mobile relay robots form an arbitrarily shaped communication chain. Each relay knows only its two direct neighbors in the chain, which must lie within its viewing radius. The goal is to shorten the chain as far as possible. The difficulty is that the relays can orient their movements only by the relative positions of their neighbors. Local algorithms for this problem already exist, for example the Hopper and the Move-on-Bisector strategies.

This master's thesis investigates whether these strategies can be improved. On the one hand, we will consider what happens if we modify the given strategies only slightly. On the other hand, we will examine to what extent it is useful to extend the capabilities of the relays.
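
A single relay step in the spirit of the Move-on-Bisector strategy can be sketched as follows; this is a simplified 2D illustration under our own assumptions (function name, step cap), not the exact published rule:

```python
import math

def bisector_step(prev, me, nxt, max_step):
    """One step for a single relay (simplified sketch): the relay
    moves along the bisector of the angle formed by its two chain
    neighbors, by at most max_step. For collinear neighbors the
    bisector direction vanishes and the relay stays put.
    """
    def unit(v):
        n = math.hypot(*v)
        return (v[0] / n, v[1] / n) if n > 0 else (0.0, 0.0)

    to_prev = unit((prev[0] - me[0], prev[1] - me[1]))
    to_next = unit((nxt[0] - me[0], nxt[1] - me[1]))
    d = (to_prev[0] + to_next[0], to_prev[1] + to_next[1])
    step = unit(d)
    # Cap the move at max_step; the bisector direction only uses the
    # relative positions of the two neighbors, so the rule is local.
    length = min(max_step, math.hypot(*d))
    return (me[0] + step[0] * length, me[1] + step[1] * length)
```

Applied synchronously by every relay, such a rule pulls the chain towards the straight line between the base stations.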

(Introductory Presentation Master Thesis)

Andreas Cord-Landwehr, University of Paderborn

July 13, 2011, 2:00 p.m., F1.110

Distributed Search in Huge Networks

We consider networks consisting of clients, servers, and service providers. The servers form the market places at which service providers can offer their services and where clients search for the best-fitting ones. A client's search query consists of a parametrized evaluation function that states how well a discovered service fits the query. As the network is huge and nodes may leave or join, we are interested in local search strategies for the clients. Contrary to most current literature, in our model it is not possible to organize the services (i.e., the service providers) a priori in a given ordered data structure.

In this talk I present two research approaches that we will examine in this scenario. One approach concentrates on the question of how to locally modify the network (according to recent search requests) to reduce the costs of future searches. The second approach focuses on the service provider's view: we ask how to best advertise the services of a specific service provider to a group of clients in order to maximize the provider's gain.

The presented research questions are embedded into the upcoming SFB 901 "On-The-Fly Computing". In this collaborative research project we ask how to establish a highly dynamic market of IT services with many participants. A key question there is how to allow efficient searching for IT services. Characteristics of the On-The-Fly market include that every participant can itself also be a service provider, that participants can join and leave the market, and that participants search for the best-fitting services (specified by evaluation functions).

Peter Kling, University of Paderborn

July 6, 2011, 2:00 p.m., F1.110

Scheduling Techniques for Large Scale Parallel Computing Systems

Parallelization and concurrency have become standard in today's computing systems, whether you consider the massive parallelization provided by modern graphics adapters to enable photo-realistic real-time rendering, high-performance clusters such as those run by Google, Amazon, and Co. to offer large amounts of computational power on demand, or even Maple running on your personal laptop exploiting the parallelism of the built-in quad-core. Without carefully designed parallel architectures as well as sophisticated techniques to utilize the computational resources, these scenarios would hardly be conceivable.

This leads us to the concept of “On-The-Fly Computing” as promoted by the upcoming SFB 901.

It envisions a nearly fully automated market for many different kinds of IT services, including software development, configuration of software based on smaller components, and computational power for a timely execution of configured software services. One key concept of this vision is the so-called “On-The-Fly Compute Center”: a term referring essentially to improved variants of today's compute centers. To cope with the expected computational demand and its timely allocation, such centers must make efficient use of all available, possibly heterogeneous, computational resources.

In my talk, I will survey the current state of scheduling techniques suitable for such large scale computing systems as well as ideas for future research directions. One focus will lie on energy-aware scheduling, as energy has become *the* limiting cost factor for modern compute centers. We will discuss ideas for profitable energy-aware scheduling techniques, i.e., schedulers that incorporate additional Quality of Service constraints into the scheduling decisions.

Fei Teng, University of Paderborn

June 29, 2011, 2:00 p.m., F1.110

Image Error Evaluation

In computer graphics, one often wants to determine the quality of an image or a video. The image quality can be computed by a comparison between the erroneous image and the original image. Research in this area mostly deals with photographs, whereas we are interested in the quality of rendered images. Errors in rendered images (approximation) have, however, a different characteristic than those in photographs (compression).

To this end, a real-time comparison program for images was developed. It includes comparison methods from the research literature that work well for photographs (e.g., a combination of the differences in luminance, contrast, and structure). Some further methods that attempt to handle rendered images were implemented as well; for example, the Image Pyramid method can determine the degree of concentration of the error.

Test results have shown that none of the methods is able to fairly compare images with different types of errors (noise/blur). Among the implemented methods, Structural Similarity is the best for photographs, and Image Pyramid the best for rendered images.
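
The combination of luminance, contrast, and structure differences mentioned above is the idea behind the Structural Similarity index; the following is a simplified, single-window Python sketch over flat pixel lists (the standard method evaluates the comparison in local windows and averages the results):

```python
def global_ssim(x, y, data_range=255.0):
    """Simplified single-window Structural Similarity (SSIM) index.

    Combines a luminance comparison (based on the mean intensities)
    with a joint contrast/structure comparison (based on variances
    and covariance). Returns 1.0 for identical images, less for
    distorted ones. x and y are equal-length flat lists of pixels.
    """
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    # Small stabilizing constants relative to the pixel value range.
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    luminance = (2 * mx * my + c1) / (mx * mx + my * my + c1)
    contrast_structure = (2 * cov + c2) / (vx + vy + c2)
    return luminance * contrast_structure
```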

(Final Presentation Bachelor Thesis)

Jörg Meier, University of Paderborn

June 29, 2011, 2:30 p.m., F1.110

A local strategy for maintaining a short chain of robots between a base station and a mobile explorer

We consider a terrain without obstacles in which a communication chain of n relays connects a static base station and a freely moving explorer. The explorer is allowed to move at a higher speed than the maximum speed of a relay. Since the explorer moves, the number of relays may change over time. We study a local strategy that maintains the communication between the endpoints of the chain while shortening the chain, in combination with two intuitive strategies for inserting new relays.

In a theoretical analysis, we were able to establish bounds on the explorer's speed in general scenarios and to give maximum numbers of rounds until a new relay can be inserted. Furthermore, the efficiency of these strategies was assessed and compared through the analysis of experiments.

(Final Presentation Bachelor Thesis)

Tobias Tscheuschner, University of Paderborn

June 22, 2011, 2:00 p.m., F1.110

The complexity of local max-cut

Local search is among the most effective approaches for dealing with hard optimization problems. In local search, one assigns a set of neighbor solutions to every solution and asks for a solution that has no better neighbor, i.e., a local optimum. The neighborhood relation between the solutions naturally induces so-called 'standard' algorithms that find a local optimum: begin with a feasible solution and iteratively move to a better neighbor until a local optimum is found. Many empirical and theoretical investigations have shown that local search methods quickly converge to a local optimum for most instances.

In this talk we study the problem of computing a local optimum for Max-Cut with the FLIP neighborhood, in which exactly one node changes its side of the partition. In particular, we consider three results. First, there are graphs of maximum degree four and initial solutions for these graphs for which every standard algorithm, independent of the pivot rule that chooses among the improving solutions, requires exponentially many steps to reach a local optimum. Second, there are graphs of maximum degree four and initial solutions for which finding a local optimum reachable from the initial solution via a standard algorithm is PSPACE-complete. Third, finding a local optimum is PLS-complete for graphs with maximum degree five.
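
The standard FLIP local search can be sketched as follows; this is a minimal Python illustration with a first-improvement pivot rule, and the data layout is our own assumption:

```python
def flip_local_search(n, edges, side):
    """Standard FLIP local search for Max-Cut.

    n: number of nodes; edges: {(u, v): weight}; side: initial 0/1
    partition as a list. Repeatedly flips a single node to the other
    side while this increases the cut weight; stops at a FLIP-local
    optimum. Termination is guaranteed because every flip strictly
    increases the (bounded) cut weight.
    """
    def gain(v):
        # Gain of flipping v: cut weight gained on edges to same-side
        # neighbors minus cut weight lost on edges to the other side.
        g = 0
        for (a, b), w in edges.items():
            if v in (a, b):
                other = b if a == v else a
                g += w if side[v] == side[other] else -w
        return g

    improved = True
    while improved:
        improved = False
        for v in range(n):  # pivot rule: first improving node
            if gain(v) > 0:
                side[v] = 1 - side[v]
                improved = True
    return side
```

The hardness results above concern exactly how many such flips a run may need, and how hard the reachable local optima are to find.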

(Research Seminar Workgroup Scheideler)

Robert Gmyr, University of Paderborn

June 15, 2011, 2:00 p.m., F1.110

Taking User Behavior into Account in 3D Rendering

User behavior can have a considerable influence on the performance of rendering methods. For example, the performance of an out-of-core method depends substantially on the set of objects kept in main memory at a given point in time. If this set is chosen poorly, the frame rate drops because objects have to be reloaded from secondary storage. Taking user behavior into account can prevent this: objects that a user has looked at frequently in the past will probably be looked at frequently in the future as well; such objects should therefore be kept in main memory to improve the performance of the rendering method.

Within this thesis, a method for taking user behavior into account in 3D rendering is to be developed. The method is to weight the objects of a scene based on the observed user behavior. This weighting should then be usable to improve the frame rate or the image quality of rendering methods as in the example above.

(Introductory Presentation Master Thesis)

Florentin Neumann, University of Paderborn

June 1, 2011, 2:00 p.m., F1.110

Local Computation of Spanners with Slack

A subgraph spanner of a weighted graph G is a sparse representation that approximately preserves the distances between all pairs of nodes. In contrast, a slack spanner of G only preserves a large fraction of the pairwise distances. In this thesis, we present the first known distributed algorithm for computing slack spanners of arbitrary weighted graphs. Given any n-node weighted graph G, the metric space induced by it, its unweighted diameter k, and 0 < epsilon < 1, we show how to locally compute an epsilon-uniform slack spanner of G in the synchronous CONGEST model in O(k) rounds. In this model, the maximum message size is limited to O(log(n)) bits. The spanner obtained is a subgraph of G that consists of O(n) edges and guarantees O(1/epsilon)-stretch for all but an epsilon-fraction of the pairwise distances.

In contrast, all previous distributed spanner constructions give stretch guarantees on all pairwise distances, but at the expense of increased spanner size. For example, there are graphs for which those constructions yield O(loglog(n))-spanners of size O(n^{1+1/loglog(n)}), whereas we can obtain O(loglog(n))-stretch spanners of size O(n) by granting the spanner a (1/loglog(n))-slack. Although our algorithm's worst-case running time depends on the unweighted diameter of the input graph, we give evidence that avoiding this dependence is non-trivial within the chosen model.

These results are complemented by an algorithm for computing slack spanners on the complete network: given an arbitrary metric space M=(V,d) of size |V|=n and 0 < epsilon < 1, we present an algorithm that locally computes a graph spanner for M with epsilon-uniform slack and a constant stretch factor in O(1/epsilon) synchronous rounds.

(Final Presentation Master Thesis)

Kamil Swierkot, University of Paderborn

June 1, 2011, 2:30 p.m., F1.110

Local Distributed Decision

A local algorithm is a distributed algorithm where each processor is a node in a graph and every processor is only capable of communicating with its neighbors. The algorithm is only allowed to make a constant number of communication rounds, whereby the message size is not bounded. Between the communication rounds every processor can make arbitrarily large computations. Over the last decades, much research has been conducted to find efficient algorithms for several problems in this distributed framework. Thereby researchers have produced impressive positive and impossibility results.

But just recently, researchers started to develop a complexity theory for the local distributed framework in order to classify languages according to the hardness of solving them locally. In their seminal paper [1], Fraigniaud et al. introduced nondeterminism on the basis of certificates and local verifiers, complexity classes, and local reductions. As in traditional complexity theory, the focus is on decision problems. This talk will give a brief overview of the concepts which will be of interest in my master's thesis. Although Fraigniaud et al. have proven several structural properties, it is still unknown which languages belong to which complexity class. Therefore my thesis will focus on this aspect.

[1] P. Fraigniaud, A. Korman, and D. Peleg, Local Distributed Decision, Arxiv preprint arXiv:1011.2152 (2010).

(Introductory Presentation Master Thesis)

Andrew Schoenrock, Carleton University

May 18, 2011, 2:00 p.m., F1.110

Parallel methods for protein interaction prediction

Protein-protein interactions play a key role in many human diseases. A detailed knowledge of protein-protein interactions will accelerate the drug discovery process as well as reduce overall drug development costs. Biologists have a variety of methods to determine protein-protein interactions in the laboratory; however, these approaches are generally expensive, labour-intensive, time-consuming, and have significantly high error rates. The Protein-protein Interaction Prediction Engine (PIPE) is a computational approach to predicting protein-protein interactions. This talk will discuss the evolution of the PIPE project. The first version of PIPE proved that the underlying hypothesis the algorithm is built upon had merit. PIPE2 offered a few major performance improvements, which allowed for the first proteome-wide, all-to-all protein-protein interaction prediction within the S. cerevisiae organism (yeast). Although PIPE2 was a significant improvement over the original PIPE, it was still plagued by an inefficient use of memory and an inability to easily scale to large supercomputers. In MP-PIPE, a mixed master-slave/all-slaves parallel model was introduced which allowed proteome-wide predictions to be made in both the C. elegans and Homo sapiens organisms. In addition to these projects, a relatively new project named PIPE-Sites, which predicts the actual interaction sites on the proteins using the PIPE output data, will also be discussed.

Patrick Briest, University of Paderborn

May 11, 2011, 2:00 p.m., F1.110

The Power of Uncertainty: Bundle-Pricing for Unit-Demand Customers

We study an extension of the unit-demand pricing problem in which a seller may offer bundles of items rather than the individual items for sale. If a customer buys such a bundle, she is guaranteed to receive one of the items contained in it, but the seller does not make any promises as to how the particular item is selected. This model is motivated by the pricing strategy of retailers that offer bundles of hotel rooms based on geographic location and star rating, and only identify the booked hotel after the purchase has been made.

As the selected item is known only in hindsight, the buying decision depends on the customer's belief about the allocation mechanism. We study strictly pessimistic and optimistic customers who always assume the worst-case or best-case allocation mechanism relative to their personal valuations, respectively. While the latter model turns out to be equivalent to the well-studied pure item pricing problem, the former is fundamentally different, and we prove the following results:

(1) A revenue-maximizing pricing can be computed efficiently in the uniform version, in which every customer has a subset of items and the same non-zero value for all items in this subset and a value of zero for all other items.

(2) For non-uniform customers computing a revenue-maximizing pricing is APX-hard.

(3) For the case that any two values of a customer are either identical or differ by at least some constant factor, we present a polynomial time algorithm that obtains a constant approximation guarantee.

This is joint work with Heiko Röglin.

Philipp Brandes, University of Paderborn

May 4, 2011, 2:00 p.m., F1.110

Robust Distributed Computation in Dynamic Networks

Consider a distributed computation in a dynamic network with n nodes. We allow the topology of this network to be changed by an adversary in each round but a connected graph must always be maintained. The nodes communicate synchronously via broadcasting to their neighbors in every round. An algorithm for this setting has been presented which is able to correctly count the number of nodes in O(n²).

But the original algorithm does not deal with robustness: it is assumed that the edges between nodes are reliable. We assume independent, random edge failures. In my master's thesis I will try to enhance the algorithm's fault tolerance against this and other types of failures, such as network churn.

(Introductory Presentation Master Thesis)

David Maicher, University of Paderborn

April 20, 2011, 2:00 p.m., F1.110

Parameterizing User Behavior when Moving through 3D Scenes with Respect to Visibility Changes

The goal of this master's thesis is to describe a user's movement through a 3D scene by means of parameters that are relevant to visibility changes. Suitable parameters could be, e.g., movement speed, rotation speed, or the maximum distance to objects. The effects of individual parameters on visibility are determined in an evaluation. For this, paths within a fixed scene are considered that satisfy certain assignments of individual parameters. From the changes in the visible objects and polygons during a movement along these paths, information about the relevance of the individual parameters can be obtained.

(Introductory Presentation Master Thesis)

Claudius Jähn, University of Paderborn

April 6, 2011, 2:00 p.m., F1.110

Creating parameterizations for virtual scenes: How to choose the sampling points

The efficiency of rendering algorithms for virtual scenes is influenced by the observer's position in the scene and several properties at that position -- like the amount of occluded geometry. In order to estimate the distribution of such a property globally for all positions in the scene, adaptive sampling techniques can be used. In this talk I am going to present different methods for choosing the positions at which the samples are evaluated, and methods for rating the quality of the resulting sampling distributions.