Winter Term 2011/12

Daniel Jung, University of Bonn

March 14, 2012, 2:00 p.m., F1.110

Multi-Agent Exploration of Trees

The task is the online exploration of graph-theoretic trees using k robots (agents), where exploration means that every tree node has to be visited by at least one robot. There is no global control: the robots execute the exploration algorithm autonomously. The robots are subject to a certain communication model (here: global and local), which specifies whether they can communicate with each other over arbitrary distances or, for example, only while visiting the same node. The time model serves as the cost model: each edge traversal costs one time unit, and since several agents move simultaneously, their simultaneous movements are charged only once.
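
To make the cost model concrete, here is a minimal sketch in which every agent traverses at most one edge per round, simultaneous traversals are charged as one time unit, and exploration ends once every node has been visited. The round-based schedule with one precomputed walk per agent is an illustrative assumption, not the online setting of the thesis.

    # Hedged sketch of the time/cost model: each agent traverses at most one
    # edge per round, simultaneous traversals cost one time unit in total,
    # and exploration ends when every node has been visited. The precomputed
    # walks are an illustrative assumption, not an online strategy.
    def exploration_time(tree_nodes, walks, start):
        """tree_nodes: set of nodes; walks: one list of nodes per agent,
        each beginning at the start node and following tree edges."""
        visited = {start}
        rounds = 0
        while visited != set(tree_nodes):
            rounds += 1
            for walk in walks:
                if rounds < len(walk):         # agent still has moves left
                    visited.add(walk[rounds])  # one edge traversal this round
            if all(rounds >= len(w) for w in walks) and visited != set(tree_nodes):
                raise ValueError("walks do not cover the tree")
        return rounds

With two agents on a star with three leaves, for example, the walks [0, 1, 0, 2] and [0, 3] starting at the centre 0 explore all nodes after three rounds.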

This diploma thesis concentrates on upper and lower bounds for the competitive ratio of online multi-agent tree exploration. Computing the competitive ratio requires the cost of an optimal offline solution for the problem under consideration. This is where the thesis starts: after a proof of the problem's NP-hardness, a 4-approximation algorithm for the optimal offline solution is presented.

The best known general lower and upper bounds on the competitive ratio for the online problem that depend only on k, Omega(log(k)/log(log(k))) and O(k/log(k)) respectively, lie far apart and therefore leave considerable room for improvement. After working through the paper that establishes the lower of these two bounds, a first attempt is made to improve this lower bound further.

The presentation of the general upper bound is followed by the analysis of several special classes of trees: the perfect binary tree, the AVL tree, and the Fibonacci tree, as well as the related Jellyfish tree from the proof of the general lower bound. Several exploration strategies are developed for these classes. For the Jellyfish tree in particular, a universal algorithmic concept is derived that allows exotic variations of the original Jellyfish tree, some of which are also analyzed here, to be explored efficiently. This in particular yields statements on how far the general lower bound can be improved with the approach of the respective paper.

Besides the algorithms developed specifically in this diploma thesis, the concept of so-called Sparse Trees is applied to the examined tree classes. With the help of a density measure defined on the explored trees, competitive ratios can then be stated.

Some of the papers used in this diploma thesis had to be corrected extensively; in the case of the approximation algorithm mentioned above, this doubled the approximation factor.

Yinan Li, University of Paderborn

March 9, 2012, 11:00 a.m., F1.110

Parallel Computation of Welding Process Data

In the automotive industry, welding is one of the most important methods for joining metals. Welding defects can disrupt the homogeneity of the seam and thus have a negative effect on its strength. Quality assurance requires that the seams of all products be examined and all welding defects be discovered. In-situ testing increases production efficiency because it is performed concurrently with the production process.

The company Benteler Automotive GmbH has developed a novel, automated weld seam inspection system with which the welding processes are monitored and evaluated in-situ. Typical weld defects such as cracks, voids, and bonding defects can be detected by the newly developed system.

In this thesis, the existing program of the weld inspection system and its algorithms have been parallelized with a focus on the architectural features of modern graphics cards (GPUs) and CPUs. Optimizing the algorithms with a CUDA (Compute Unified Device Architecture) implementation accelerates the algorithmic part by a factor of 4.5. Using OpenMP (Open Multi-Processing) on the CPU, the non-algorithmic part of the program runs 1.3 times faster. Overall, the combined OpenMP and CUDA execution achieves a speed-up factor of 3.3 compared with a single CPU core.
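
As a side note, the way these partial speed-ups combine into an overall speed-up can be illustrated with an Amdahl-style calculation. The 85% runtime share of the algorithmic part used below is a hypothetical assumption chosen for illustration, not a figure from the thesis.

    # Hedged sketch: combining partial speed-ups (Amdahl-style). The runtime
    # split below is a hypothetical assumption, not a measurement.
    def combined_speedup(frac_algorithmic, s_algorithmic, s_rest):
        """Overall speed-up when a fraction of the single-core runtime is
        accelerated by s_algorithmic and the remainder by s_rest."""
        frac_rest = 1.0 - frac_algorithmic
        return 1.0 / (frac_algorithmic / s_algorithmic + frac_rest / s_rest)

    # Assumed split: 85% of the single-core runtime in the algorithmic part.
    print(round(combined_speedup(0.85, 4.5, 1.3), 2))  # about 3.3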

(Final Presentation Master Thesis)

Benjamin Koch, University of Paderborn

February 22, 2012, 4:30 p.m., F1.110

Combination of algorithmic methods and control theory to optimize chains of mobile relays

Two robots can be connected by a chain of mobile relays. This communication chain should be as short as possible. To shorten such a chain several strategies have been proposed and analysed. Most of them use discrete time steps.

In this bachelor thesis, I have implemented several strategies on a real robot to test them in practice. In a real system, using discrete time steps does not make much sense unless limitations of the robot preclude continuous measurements or movements. Therefore, I use (pseudo-)continuous strategies.

In this thesis, I propose a method that transforms discrete strategies into continuous strategies by adding a closed-loop position controller. The transformed strategies cannot be analysed by the usual methods, but mechatronics and control theory can be used to examine them and to tune the parameters that arise in the transformation.
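
For illustration only, one simple way to turn a discrete target-computing strategy into a continuous one is to feed its target through a proportional position controller. The Go-to-the-Middle target rule and the controller gain below are illustrative assumptions, not the controller designed in the thesis.

    # Hedged sketch: a discrete strategy outputs a target position; a simple
    # proportional controller turns it into a (pseudo-)continuous motion.
    # Target rule, gain, and time step are illustrative assumptions.
    def go_to_the_middle_target(left, right):
        """Discrete-style rule: target is the midpoint of the two neighbours."""
        return ((left[0] + right[0]) / 2.0, (left[1] + right[1]) / 2.0)

    def continuous_step(pos, left, right, gain=1.0, dt=0.01):
        """One (pseudo-)continuous update: velocity proportional to the
        remaining error towards the discrete target."""
        tx, ty = go_to_the_middle_target(left, right)
        vx, vy = gain * (tx - pos[0]), gain * (ty - pos[1])
        return (pos[0] + vx * dt, pos[1] + vy * dt)

The gain and the sampling interval dt are exactly the kind of parameters that arise in the transformation and can be tuned with methods from control theory.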

Several practical concerns have to be considered for real robots: they must not collide but instead evade each other, and superfluous robots should be removed from the chain. Some strategies can do the latter, but this ability is lost in the transformation. I have implemented a removal algorithm that works for all strategies and is independent of the time model.

(Final Presentation Bachelor Thesis)

Sven Kurras, University of Paderborn

January 25, 2012, 2:00 p.m., F1.110

Distributed Sampling of Regular Graphs

Generating random graphs, called graph sampling, is used as a tool in various applications and is also of theoretical interest in its own right. In the context of peer-to-peer networks, one is interested in sampling d-regular graphs that provide high connectivity and low diameter while being robust against node failures and adversarial attacks. Distributed strategies are preferred, because any central server is a potential bottleneck and makes such a network vulnerable. Since random graphs provide the above properties with high probability, a promising distributed strategy is to frequently randomize the connections between the nodes by a simple local graph perturbation, aiming for an unknown global structure that is sufficiently close to random. An example of such an operation is the k-Flip, where the outer edges of a random (k + 2)-walk are flipped. The most challenging part of the analysis is to quantify the speed of convergence of the underlying Markov chain on all connected d-regular graphs towards its stationary distribution, known as its mixing time. For this purpose, advanced proof techniques that can handle such highly combinatorial structures have been developed over the last decades, but they still need further refinement.
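
For intuition, the classic degree-preserving edge switch, the kind of local perturbation that flip-style operations build on, can be sketched as follows. This standard switch is a simplification and not necessarily identical to the k-Flip analysed in the thesis.

    # Hedged sketch: a degree-preserving edge switch on a simple graph, the
    # classic local perturbation behind flip-style operations. Connectivity
    # is not checked here; this is not necessarily the exact k-Flip.
    import random

    def try_switch(edges):
        """Pick two disjoint edges (a, b) and (c, d) and rewire them to
        (a, c) and (b, d) if the graph stays simple; degrees are preserved."""
        edge_set = set(edges)
        (a, b), (c, d) = random.sample(edges, 2)
        if len({a, b, c, d}) < 4:
            return edges  # edges share a vertex; reject the move
        for u, v in ((a, c), (b, d)):
            if (u, v) in edge_set or (v, u) in edge_set:
                return edges  # would create a parallel edge; reject
        edge_set -= {(a, b), (c, d)}
        edge_set |= {(a, c), (b, d)}
        return list(edge_set)

Repeating such accepted-or-rejected moves defines a Markov chain on d-regular graphs, and the quantity of interest is how quickly this chain mixes.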

This thesis focuses on working out the present state of the art in this research field, as well as on contributing new insights and theoretical results. After an introduction to the theoretical background of distributed sampling strategies, the first main focus is an extensive report on the most powerful proof techniques from this field, especially the latest refinements that are not yet covered by standard literature. The second main focus lies on three independent results of my own:

  1. an extension to the k-Flip to make it applicable whenever its flip edges do not induce a triangle
  2. a generalization of an existing convergence rate proof for the 1-Flip to the k-Flip
  3. lower bounds that make it possible to analyze principal limitations of the prominent canonical path technique

In my talk, I provide insights into this research field and some of my results, with focus on the canonical path technique.

(Final Presentation Master Thesis)

Tobias Bertel, University of Paderborn

January 11, 2012, 2:00 p.m., F1.110

Development of a Data Structure for Real-Time Rendering with Avoidance of unnecessary State Changes

OpenGL 1.5 (2003) introduced Buffer Objects (Batches), allowing a programmer to store the primitives of a scene (i.e., Vertex Arrays) in the VRAM of the GPU. This massively reduces the amount of data transferred between CPU RAM and GPU VRAM while rendering a scene.

This thesis investigates the rendering of textured, static 3-D scenes with complexities ranging from 100,000 up to 2.3 million triangles, with respect to several rendering approaches using Batches. The primary aim is to check whether the examined rendering approaches are able to deliver interactive frame rates. In addition, the number of OpenGL state changes required to draw a single frame is measured in order to relate frame rate and state changes.

In general, I compare two rendering approaches: On the one hand, all primitives are sorted by their texture (State Sorting) and drawn in every frame; this yields one Batch for each texture used in the whole scene. On the other hand, the primitives are distributed over an Octree and Batches are built in the nodes of the Octree. The Octree is rendered by a depth-first traversal starting at its root, with View-Frustum-Culling disabled by default. This setting makes it possible to examine how the frame rate changes when optimal Batches (State Sorting approach) are used, as opposed to the non-optimal Batches resulting from the hierarchical distribution of the scene over an Octree.
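
A minimal sketch of the State Sorting idea follows: primitives are grouped by texture so that each texture has to be bound only once per frame. The data layout and the bind/draw callbacks are placeholders, not the implementation from the thesis.

    # Hedged sketch of State Sorting: group primitives by texture so that
    # each texture is bound once per frame. Data layout and the bind/draw
    # callbacks are placeholders, not the thesis implementation.
    from collections import defaultdict

    def build_batches(primitives):
        """primitives: iterable of (texture_id, triangle) pairs.
        Returns one batch (list of triangles) per texture."""
        batches = defaultdict(list)
        for texture_id, triangle in primitives:
            batches[texture_id].append(triangle)
        return batches

    def render_frame(batches, bind_texture, draw_batch):
        """One state change (texture bind) per batch instead of per primitive."""
        for texture_id, triangles in batches.items():
            bind_texture(texture_id)  # single state change per batch
            draw_batch(triangles)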

The evaluation shows, among other things, that the rendering approach based only on State Sorting yields better average frame rates than the approach using an Octree with Batches in its nodes, even when View-Frustum-Culling is enabled. This is remarkable given that much of the scene geometry was culled on some of the chosen test paths.

(Final Presentation Bachelor Thesis)

Kathrin Bujna, University of Paderborn

December 21, 2011, 2:00 p.m., F1.110

Parallel Real-Time Rendering using Heterogeneous PC Clusters

We are given an arbitrarily shaped chain of mobile relay robots that connects two base stations. Each relay knows only its two neighbours in the chain, which have to be within its viewing range. The goal is to shorten this chain as far as possible. The difficulty is that each relay can base its decision where to move only on the relative positions of its neighbours. Different local strategies already exist for this short-chain problem, such as the Hopper, the Go-to-the-Middle, and the Move-on-Bisector strategy.

In my master's thesis I have investigated whether the existing strategies can be improved. To this end, I have generalised and parametrised the existing strategies and used optimisation techniques known from machine learning to find out which parameter combinations (i.e., concrete strategies) work best in certain experiments. More precisely, I have shown how the strategies can (or cannot) be improved with respect to different classes of initial chains and different performance measures (the runtime, the travelled distance, and a measure that combines both). For example, it turns out that the so-called shorten operation of the Hopper strategy is not strictly necessary: in our experiments the resulting strategy achieves a performance similar to the classical Hopper, and it can be proven that a relative chain length of 2 is always reached. Another example concerns the Go-to-the-Middle strategy. The experiments indicate that, regarding the runtime as well as the travelled distance, it is better to let the relays move to the middle between their neighbours in sequential order rather than moving all relays in parallel.
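
For illustration, here is a minimal sketch of the Go-to-the-Middle rule with the two update orders compared in the experiments; the list-of-points representation of the chain is an assumption made for this sketch.

    # Hedged sketch: Go-to-the-Middle on a chain of 2-D relay positions,
    # once with a parallel and once with a sequential update order. The
    # list-of-tuples representation is an illustrative assumption.
    def midpoint(a, b):
        return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

    def gtm_parallel(chain):
        """All inner relays move simultaneously, based on old positions."""
        new_chain = list(chain)
        for i in range(1, len(chain) - 1):
            new_chain[i] = midpoint(chain[i - 1], chain[i + 1])
        return new_chain

    def gtm_sequential(chain):
        """Relays move one after another, each seeing already updated neighbours."""
        chain = list(chain)
        for i in range(1, len(chain) - 1):
            chain[i] = midpoint(chain[i - 1], chain[i + 1])
        return chain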

(Final Presentation Master Thesis)

Tim Süß, University of Paderborn

December 14, 2011, 2:00 p.m., F1.110

Parallel Real-Time Rendering using Heterogeneous PC Clusters

3D scenes created with CAD applications often have a high geometric complexity. There are several concepts (such as out-of-core rendering, levels of detail, and parallel rendering) to render such scenes in real time. This work focuses on the use of heterogeneous PC clusters for parallel real-time rendering of highly complex scenes. For three different scene types, specific rendering approaches were developed, in which a small group of high-end computers is supported by a large number of weaker PC cluster nodes. The first scene type consists of static scenes that can be stored completely in a single computer's main memory, while scenes of the second type exceed this memory limitation. Scenes of the last type contain not only static but also dynamic objects.

Ralf Petring, University of Paderborn

December 7, 2011, 2:00 p.m., F1.110

Multi Algorithm Rendering

There are many rendering algorithms known in computer graphics. Each of them performs well for a specific type of scene, but may perform poorly on other types of scenes. In this talk we consider scenes that combine different scene types in a single scene. For such scenes there is no single optimal algorithm. We present an idea of how to use several algorithms within a single frame to display different parts of the scene. To this end, we use sampled observer positions to find the best algorithm, out of a given set, for the different scene parts. We consider culling algorithms as well as approximation algorithms. In this talk, we focus on the final decision of which algorithms to use at a specific position during runtime. This is an optimization problem whose conditions can change from frame to frame.
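
A minimal sketch of the runtime decision: for each scene part, pick the algorithm whose measured cost at the sampled observer position closest to the current one is lowest. The cost table and the nearest-sample lookup are illustrative assumptions, not the actual optimization procedure.

    # Hedged sketch: choose a rendering algorithm per scene part based on
    # measurements taken at sampled observer positions. The cost table and
    # nearest-sample lookup are illustrative assumptions.
    import math

    def nearest_sample(observer, samples):
        return min(samples, key=lambda s: math.dist(observer, s))

    def choose_algorithms(observer, samples, cost):
        """cost[(sample, part)][algorithm] -> measured frame cost.
        Returns the cheapest algorithm for each scene part."""
        s = nearest_sample(observer, samples)
        choice = {}
        for (sample, part), algo_costs in cost.items():
            if sample == s:
                choice[part] = min(algo_costs, key=algo_costs.get)
        return choice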

Alexander Maecker, University of Paderborn

November 30, 2011, 2:00 p.m., F1.110

Analysis of a continuous, local strategy for constructing a short chain of robots

Optimizing a communication chain between two base stations, consisting of a group of mobile robots with limited capabilities, is a well-known problem. The Move-On-Bisector strategy, introduced in [1], is an efficient continuous, local strategy that moves the robots onto the straight line between the base stations in time O(n). To do so, the robots adjust their movement based only on the positions of their neighbours.

The intention is to analyse this strategy in a modified, less idealised model. In contrast to [1], it is assumed that a robot measures the positions of its neighbouring robots with a constant delay. Under this assumption, the strategy has been analysed theoretically for different configurations. In addition, further insights into properties of the strategy have been gained by simulations.
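
For illustration, here is a minimal sketch of a bisector-direction update in which a robot only sees neighbour positions that are one constant delay old; the unit speed and the explicit Euler step are simplifications of the continuous model.

    # Hedged sketch: one Euler step of a Move-On-Bisector-style update with
    # delayed neighbour positions. Unit speed and the explicit time step are
    # simplifications of the continuous model from [1].
    import math

    def normalize(vx, vy):
        n = math.hypot(vx, vy)
        return (0.0, 0.0) if n == 0 else (vx / n, vy / n)

    def bisector_step(pos, left_delayed, right_delayed, dt=0.01, speed=1.0):
        """Move along the bisector of the angle spanned by the (delayed)
        directions towards the two neighbours."""
        ux, uy = normalize(left_delayed[0] - pos[0], left_delayed[1] - pos[1])
        vx, vy = normalize(right_delayed[0] - pos[0], right_delayed[1] - pos[1])
        dx, dy = normalize(ux + vx, uy + vy)  # bisector direction
        return (pos[0] + speed * dx * dt, pos[1] + speed * dy * dt)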

[1] Degener, Bastian; Kempkes, Barbara; Kling, Peter; Meyer auf der Heide, Friedhelm. A continuous, local strategy for constructing a short chain of mobile robots. In: SIROCCO '10: Proceedings of the 17th International Colloquium on Structural Information and Communication Complexity, vol. 6058, Springer, June 7-11, 2010, pp. 168-182.

(Final Presentation Bachelor Thesis)

Benjamin Eikel, University of Paderborn

November 23, 2011, 2:00 p.m., F1.110

Spherical Visibility Sampling for a new Data Structure

I am developing a new data structure for rendering complex 3D scenes. The idea for this new data structure is based on the Randomized Sample Tree [1] and its shortcoming of not taking visibility information into account. The new data structure shall store visibility information for different viewing directions onto an object.

This information can be used while walking through the virtual scene: when the object is to be displayed, the current viewing direction is determined and only the parts that were previously found visible have to be displayed. Because visibility information can neither be collected nor stored for infinitely many viewing directions, a sampling approach is used. Sample positions are created on the surface of the object's bounding sphere, and visibility information is collected for these positions. I will present my current ideas, results, and open problems concerning the ongoing work on spherical visibility sampling.
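
A minimal sketch of the lookup at display time: given the current viewing direction, take the precomputed visibility of the closest sample direction. The dictionary layout mapping unit sample directions to sets of visible parts is an illustrative assumption.

    # Hedged sketch: look up precomputed visibility for the sample direction
    # closest to the current viewing direction. The data layout (unit
    # direction -> set of visible part ids) is an illustrative assumption.
    import math

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    def closest_sample(view_dir, visibility):
        """visibility: dict mapping unit sample directions (3-tuples) to the
        set of object parts found visible from that direction."""
        n = math.sqrt(dot(view_dir, view_dir))
        v = tuple(c / n for c in view_dir)
        return max(visibility, key=lambda s: dot(s, v))  # largest cosine

    def visible_parts(view_dir, visibility):
        return visibility[closest_sample(view_dir, visibility)]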

[1] Jan Klein, Jens Krokowski, Matthias Fischer, Michael Wand, Rolf Wanka, and Friedhelm Meyer auf der Heide. The Randomized Sample Tree: A Data Structure for Interactive Walkthroughs in Externally Stored Virtual Environments. In Proceedings of the ACM symposium on Virtual reality software and technology (VRST '02)

Yinan Li, University of Paderborn

October 26, 2011, 2:00 p.m., F1.110

Parallel Computation of Welding Process Data

In the automotive industry, welding is one of the most important methods for joining metals. Welding defects can disrupt the homogeneity of the seam and thus have a negative effect on its strength. Quality assurance requires that the seams of all products be examined and all welding defects be discovered. In-situ testing increases production efficiency because it is performed concurrently with the production process.

The company Benteler Automotive GmbH has developed a novel, automated weld seam inspection system with which the welding processes are monitored and evaluated in-situ. Typical weld defects such as cracks, voids, and bonding defects can be detected by the newly developed system.

In the context of this thesis, the existing algorithms of the weld inspection system should be parallelized with a focus on the architectural features of modern graphics cards (GPUs) and CPUs. The runtime behaviour, as well as further optimizations of the algorithms, should be examined. The results should be confirmed with a running prototype.

(Introductory Presentation Master Thesis)

David Maicher, University of Paderborn

October 19, 2011, 2:00 p.m., F1.110

Visibility changes in virtual scenes caused by user's behaviour

The aim of this Master's thesis is to understand how a user's movement behaviour within a 3D scene influences visibility changes. To this end, the user behaviour is first described by parameters such as movement speed or rotation speed. The influence of the individual parameters on visibility changes is determined in an evaluation: paths within two fixed scenes that match certain parameter values are considered, and information on the relevance of the individual parameters is gained by moving along these paths and measuring the changes in visibility. On the basis of this information, a heuristic is proposed that can estimate the visibility changes for given paths and thus makes it possible to differentiate between paths.

(Final Presentation Master Thesis)

Project Group: NODES - Offering Dynamics to Emerging Structures, University of Paderborn

October 12, 2011, 2:00 p.m., F1.110

Our project group focuses on the research area of local strategies in dynamic networks. Changes that lie outside of the control of the network are referred to as external dynamics. Modifications initiated by the system itself are referred to as controlled dynamics. We want to investigate how controlled dynamics can be used to counteract external dynamics. By using local strategies we aim at ensuring scalability and robustness.

We will consider self-stabilizing small-world networks and networks in which the positions of the nodes are adapted based on their communication behavior. Furthermore, we will look at networks in which nodes exhibit certain roles. In particular, we will look at a certain coloring and facility location problem.

To study the behaviors of our algorithms and gain new insights, we developed a simulator. We will give a short demonstration of the simulator.

(Interim Report)

Kamil Swierkot, University of Paderborn

October 12, 2011, 2:30 p.m., F1.110

Complexity Classes for Local Computation

Recently, Korman, Peleg, and Fraigniaud introduced a complexity theory for the local distributed setting. They defined three complexity classes: LD (Local Decision), NLD (Nondeterministic Local Decision), and NLD#n, where the algorithms are required to use at most a constant number of communication rounds. To define the classes NLD and NLD#n, they introduced nondeterminism for the distributed setting via certificates and verifiers.
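
For orientation, the acceptance criterion behind local decision can be sketched as follows; this is my summary of the Fraigniaud/Korman/Peleg framework and not necessarily the exact wording used in the thesis. A distributed language L is in LD if some local algorithm running in O(1) communication rounds satisfies

    \[
      (G, x) \in L \;\Longrightarrow\; \text{every node outputs ``yes''},
      \qquad
      (G, x) \notin L \;\Longrightarrow\; \text{at least one node outputs ``no''}.
    \]

Roughly speaking, NLD additionally equips the nodes with certificates, and an instance is accepted if some certificate assignment makes every node output "yes".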

In my master's thesis, I have studied different aspects of this complexity theory. On the one hand, I give a comprehensive classification of languages whose computational tasks are of interest in research on distributed algorithms. On the other hand, I have found two hierarchies within the complexity classes LD and NLD. The hierarchy for LD states that one additional communication round suffices to decide more languages. The hierarchy within NLD is based on the size of the certificates needed to verify a language.

In my talk, I will briefly introduce the needed concepts of this complexity theory. Moreover, I will give some additional definitions and an overview of the main results.

(Final Presentation Master Thesis)