Winter Semester 2012/13

Daniel Jung

February 20, 2013, 2:00 pm, F1.110

Gathering stupid agents without extent

Assume we have a crowd of indistinguishable, point-shaped mobile agents distributed in the plane. The agents have only very few capabilities: in particular, they cannot communicate with other agents, have only a limited (constant) viewing range, are oblivious, and do not have access to any global information about the current scenario.
We require that this crowd cannot be divided into two subgroups such that no agent of one subgroup can see an agent of the other subgroup; we say "the crowd is connected". We want these agents to build certain formations. As a start, we want them to gather in a single point. Previously, we developed algorithms that achieve this in time O(n^2), where n is the number of agents. My current work concentrates on lowering this upper bound to O(n), exploiting the special case that agent positions are restricted to Z^2. First experiments suggest that my algorithm can guarantee this bound; the formal proof is still work in progress.
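
A minimal toy sketch of the setting (not the algorithm from the talk): point agents on Z^2 with a constant viewing range, each moving one grid step toward the centroid of the agents it sees. The viewing range and the movement rule are assumptions for illustration only; connectivity preservation and the O(n) bound are not addressed by this rule.

    VIEW_RANGE = 2  # constant viewing range, Chebyshev distance (assumed value)

    def sign(d):
        return (d > 0) - (d < 0)

    def visible_from(agent, agents):
        # all agents within the constant viewing range (includes the agent itself)
        ax, ay = agent
        return [(x, y) for (x, y) in agents
                if max(abs(x - ax), abs(y - ay)) <= VIEW_RANGE]

    def step(agents):
        # every agent moves one grid step toward the centroid of its visible neighbors
        new_positions = []
        for (ax, ay) in agents:
            seen = visible_from((ax, ay), agents)
            cx = sum(x for x, _ in seen) / len(seen)
            cy = sum(y for _, y in seen) / len(seen)
            new_positions.append((ax + sign(cx - ax), ay + sign(cy - ay)))
        return new_positions

    agents = [(0, 0), (1, 1), (2, 1), (3, 3)]
    for _ in range(10):
        agents = step(agents)
    print(agents)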

Max Drees

February 20, 2013, 2:30 pm, F1.110

Exciting insights into a new model for the page migration problem

In the usual model for the page migration problem, various clients and (initially closed) facilities are located inside a metric space. The task is to open a number of facilities and to allocate each client to exactly one of them such that the costs are minimized. The total costs consist of the individual costs of each open facility and the costs for each connection between a client and a facility; the latter are proportional to the distance. In this talk, a new variation of this model is discussed.
In the new model, each client is connected automatically to every open facility within a certain distance. The closer such a facility is located to the client, the higher the profit for this specific connection. In other words, every client generates a payoff which depends on the number of open facilities in its vicinity and their distances to it. However, the payoff of each facility is capped once it reaches a constant bound. The task is to find an optimal set of facilities to open, such that the difference between the sum of the payoffs and the sum of the facility costs is maximized.
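
A minimal sketch of evaluating this objective for a given set of open facilities; the linear payoff shape, the connection radius, and the reading of the cap as a per-facility bound are assumptions made only for illustration.

    from math import dist

    RADIUS = 5.0   # connection range (assumed value)
    CAP = 3.0      # constant payoff bound (assumed value)

    def objective(clients, facilities, open_set, opening_cost):
        value = 0.0
        for i in open_set:
            f = facilities[i]
            # assumed payoff shape: decreases linearly with distance, 0 beyond RADIUS
            payoff = sum(max(0.0, 1.0 - dist(c, f) / RADIUS) for c in clients)
            value += min(payoff, CAP) - opening_cost[i]   # cap, then subtract facility cost
        return value

    clients = [(0, 0), (2, 1), (6, 6)]
    facilities = [(1, 0), (5, 5), (9, 9)]
    print(objective(clients, facilities, open_set={0, 1}, opening_cost=[1.0, 1.0, 1.0]))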

Lukas Kopecki

January 23, 2013, 2:00 pm, F1.110

Rendering of Complex Animated Scenes in Real-Time

Today's simulations become more and more complex, using animated objects that simulate machine and human behavior in order to obtain authentic simulation results. To cope with the complexity of such scenes, an algorithm based on the randomized sample tree [Klein et al., 2002] has been developed that is able to render complex animated scenes.
The algorithm reduces the overhead caused by skeletal animation. Its advantage lies in exploiting the distribution of the joints stored inside the skeleton tree, which define the animations. The algorithm reduces the animation processing time by lowering the number of animated joints depending on the distance to the user.
To measure the animation quality, an evaluation method has been developed that counts the animated pixels in a scene. This allows comparing the polygons stored within one node by the developed algorithm and by the randomized sample tree. The evaluation shows that the picture quality is at least as good as that of the randomized sample tree and that the developed algorithm animates more pixels within one scene.
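
An illustrative sketch of distance-dependent joint reduction (not the implementation from the thesis); the depth thresholds and the skeleton layout are assumed values.

    def joints_to_animate(joints, distance):
        """joints: list of (name, depth_in_skeleton_tree); deeper joints are
        dropped first as the distance to the viewer grows."""
        if distance < 10.0:
            max_depth = 99        # animate the full skeleton
        elif distance < 50.0:
            max_depth = 2         # only the upper levels of the skeleton tree
        else:
            max_depth = 0         # root joint only
        return [name for name, depth in joints if depth <= max_depth]

    skeleton = [("root", 0), ("spine", 1), ("arm_l", 2), ("hand_l", 3)]
    print(joints_to_animate(skeleton, 5.0))    # all joints
    print(joints_to_animate(skeleton, 30.0))   # root, spine, arm_l
    print(joints_to_animate(skeleton, 80.0))   # root only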

(Final presentation of the Master's thesis, talk in German)

Ralf Petring

January 23, 2013, 2:30 pm, F1.110

Real-Time Rendering of Heterogeneous 3D Scenes

Many virtual scenes are composed of data from different sources like laser scans, computer-aided design, 3D modeling, etc. There is no single algorithm that is able to render such heterogeneous data both fast and with good quality. Our method tackles this problem by splitting up the scene into regions in a preprocessing step. Each region can later be rendered with different algorithms (conservative as well as approximate) at runtime. In a second preprocessing step, our method gathers information about the regions: how well do different algorithms perform on a region depending on the observer's position? The measured results (consisting of running time and image quality) are stored for every region. At runtime, these results are used to solve an optimization problem that minimizes the error in the rendered image while keeping the running time below a given threshold.
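
The following sketch illustrates this kind of runtime decision with a simple greedy heuristic; the per-region (time, error) measurements are made up, and the actual system solves a proper optimization problem rather than this greedy upgrade loop.

    def choose_algorithms(regions, budget):
        """regions: {name: [(time, error), ...]}, candidates sorted by increasing
        time and (assumed) decreasing error; budget: frame-time threshold."""
        choice = {r: 0 for r in regions}                        # start with the fastest candidate
        total_time = sum(c[0][0] for c in regions.values())
        while True:
            best = None
            for r, cands in regions.items():
                i = choice[r]
                if i + 1 < len(cands):
                    dt = cands[i + 1][0] - cands[i][0]
                    de = cands[i][1] - cands[i + 1][1]          # error reduction of the upgrade
                    ratio = de / dt if dt > 0 else float("inf")
                    if total_time + dt <= budget and (best is None or ratio > best[0]):
                        best = (ratio, r, dt)
            if best is None:
                return choice, total_time
            _, r, dt = best
            choice[r] += 1
            total_time += dt

    regions = {
        "hall":   [(2.0, 0.30), (5.0, 0.05)],
        "facade": [(1.0, 0.20), (3.0, 0.10), (6.0, 0.01)],
    }
    print(choose_algorithms(regions, budget=9.0))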

(Talk in German)

Markus Benter

January 9, 2013, 2:00 pm, F1.110

Queries with Weighting Functions in Distributed Markets

To enable scalability, it is desirable to realize a market in a distributed fashion. While the offered products can change continuously, customers can send queries to find products whenever they want. Since it happens quite often that none of the products matches the desired one exactly, it is an advantage to be able to find products that fit as well as possible. However, how important a certain property of a product is may differ from customer to customer. Therefore, we allow the customer to define a rather general weighting function that describes how important each property is. This weighting function is included in the query. Given the query, the system has to find the best possible match. In this talk, we describe the problem statement and present first ideas for solutions.
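
A small sketch of one possible instantiation: products as dictionaries of numeric properties, the weighting function as one weight per property, and the score as the weighted deviation from the desired values. All names and the scoring shape are assumptions for illustration, not the talk's definition.

    def score(product, desired, weights):
        # lower is better: weighted sum of deviations from the desired property values
        return sum(weights[p] * abs(product.get(p, 0) - desired[p]) for p in desired)

    def best_match(products, desired, weights):
        return min(products, key=lambda prod: score(prod, desired, weights))

    products = [
        {"price": 100, "capacity": 32, "weight_g": 200},
        {"price": 80,  "capacity": 16, "weight_g": 150},
    ]
    desired = {"price": 90, "capacity": 32, "weight_g": 180}
    weights = {"price": 1.0, "capacity": 5.0, "weight_g": 0.1}  # customer-defined importance
    print(best_match(products, desired, weights))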

Michael Knopf

December 19, 2012, 2:00 pm, F1.110

Scheduling Variants with Speed-Scaling via the Primal-Dual Approach

In this introductory talk I will outline the goals of my master's thesis. The focus of my work lies on energy-aware scheduling algorithms for speed-scalable systems. Such algorithms allow for significant energy savings and thus may help to reduce the operational costs of data centers or to extend the battery life of mobile devices. My work has two main objectives. On the one hand, I will investigate possible extensions of current models by including interrupt costs (e.g., for job preemption). On the other hand, I will evaluate a novel analysis technique: the so-called online primal-dual approach. This technique utilizes duality theory to derive suitable online algorithms. In my talk I will give a brief introduction to the general problem of online scheduling with speed scaling, existing models and algorithms, as well as the role of the online primal-dual approach in this context.
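
For intuition, a tiny sketch of the standard speed-scaling energy model, where running at speed s consumes power s^alpha; the concrete exponent is an assumed typical value, and the models considered in the thesis may differ in their details.

    ALPHA = 3.0  # assumed typical value of the power exponent

    def energy(work, speed, alpha=ALPHA):
        # a job of `work` units at constant speed s takes work/s time and
        # consumes (work/s) * s**alpha energy
        return (work / speed) * speed**alpha

    # Running slower saves energy but increases the completion time:
    for s in (1.0, 2.0, 4.0):
        print(f"speed {s}: time {10.0 / s:.1f}, energy {energy(10.0, s):.1f}")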

Jonas Knoll

December 12, 2012, 2:00 pm, F1.110

Approximative Rendering using Sample-Based Occlusion Culling

In this introductory talk I will outline the goals of my master's thesis, in which I focus on developing a new approximate rendering algorithm based on existing ones, namely the Spherical Visibility Sampling and the Blue Surfels algorithms. The new algorithm will use techniques of both of them. In a preprocessing step, visibility information for the triangles is computed and the visible triangles are stored in a list. They are arranged within this list in such a way that every prefix is a good representation of the respective part of the scene. Then, similar to the Blue Surfels algorithm, the scene is rendered using only a subset of the triangles (a prefix of the list), depending on its projected size. This way the number of rendered triangles is reduced to achieve real-time rendering of complex 3D scenes. Besides the concept of the new algorithm, I will briefly present the Randomized Sample Tree, Spherical Visibility Sampling, and Blue Surfels rendering algorithms.
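
A sketch of the intended prefix selection; the mapping from projected size to prefix length is an assumed heuristic, not the method developed in the thesis.

    def prefix_length(num_triangles, projected_pixels, triangles_per_pixel=4):
        # assumed heuristic: a few triangles per covered pixel, at most the full list
        return min(num_triangles, projected_pixels * triangles_per_pixel)

    triangle_list = list(range(1_000_000))           # stands in for the preprocessed list
    budget = prefix_length(len(triangle_list), projected_pixels=5_000)
    visible_subset = triangle_list[:budget]          # only this prefix is rendered
    print(len(visible_subset))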

(Introductory presentation of the Master's thesis, talk in German)

Bastian Degener, 3con Management Consultants

December 7, 2012, 4:00 pm, F0.231

Strategic Management Consultants for the Process Industry

The presentation gives insight into consulting at 3con, a strategic management consultancy for the process industry. Furthermore, a typical consulting project will be demonstrated based on a real case dealing with the optimization of a chemical production site. Finally, I will show career perspectives at 3con, using my own experiences as an example.

Sebastian Abshoff

December 5, 2012, 2:00 pm, F1.110

Towards More Controllable Dynamic Networks

The study of dynamics in networks has received great attention in recent years. Kuhn et al. model a dynamic network as an edge-dynamic graph that changes arbitrarily and adversarially from round to round, as long as the graph remains connected in each round. Nodes in the network have a unique ID, and each node may send one message of size O(log(n)) bits per round, where n is the number of nodes. A message sent by some node in round r is delivered to its neighbors in the graph of the following round r+1. Computation in this model requires termination, i.e., each node must reach a final state in which it outputs the result of the computation, does not send any message, and does not react to any incoming message. Problems studied in this model include counting and information dissemination problems.
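
A toy round-based simulation of this communication rule (delivery under the graph of the following round); the adversarial graph sequence and the messages are made up for illustration, and connectivity is not checked.

    def run_rounds(nodes, graphs, make_message, receive):
        """graphs[r] is the (adversarially chosen) edge set of round r; a message
        sent in round r is delivered under the graph of round r + 1."""
        sent_last_round = {}
        for r, edges in enumerate(graphs):
            neighbors = {v: set() for v in nodes}
            for u, v in edges:
                neighbors[u].add(v)
                neighbors[v].add(u)
            for u, msg in sent_last_round.items():      # deliver last round's messages
                for v in neighbors[u]:
                    receive(v, u, msg)
            sent_last_round = {v: make_message(v, r) for v in nodes}

    # Example: every node broadcasts its own id (a message of O(log n) bits).
    known = {v: {v} for v in "abc"}
    run_rounds(
        nodes="abc",
        graphs=[{("a", "b"), ("b", "c")}, {("a", "c"), ("b", "c")}],
        make_message=lambda v, r: v,
        receive=lambda v, u, msg: known[v].add(msg),
    )
    print(known)   # which ids each node has heard of after two rounds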

In my talk, I want to give an overview of closely related papers as well as my own research building upon that model. Unfortunately, many restricted dynamic models seem to be as hard as the original, unrestricted dynamic model. Moreover, many other problems that can be studied in this setting lead back to the same fundamental questions that were already posed in the paper by Kuhn et al. and have remained open since then. For this reason, I want to present some model extensions that allow for more control and that raise new questions, some of them related to the field of self-stabilizing algorithms.

Andreas Cord-Landwehr

December 5, 2012, 2:30 pm, F1.110

Convergence of Self-Organizing Networks

Self-organizing networks with autonomously acting peers are a natural way to model the evolution of networks like the Internet. In those networks, each participant can perform selfish local network changes, which not only reduce her private communication costs but also affect the costs of other participants. For example, a peer may copy a file from another peer to herself: this makes the file not only available to the copying participant, but also better accessible to the entire network. Comparing the equilibrium states of the network models that were formulated over the last decade, it turns out that the Price of Anarchy is usually low and often constant. But when looking at the convergence to equilibrium states, most models will never converge. In this talk, I want to introduce a new model that shall help shed light on the efficiency loss (in terms of convergence time) caused by the selfish actions of the peers. For this, we combine the network creation game with an underlying static network topology, as it is usually available in real networks.
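
As a rough illustration of such a selfish improvement step, the following sketch uses a classic network-creation-style private cost (edge price plus sum of hop distances); this cost model is an assumption for illustration and not necessarily the model introduced in the talk.

    from collections import deque

    ALPHA = 2.0   # assumed price per bought edge

    def dist_sum(adj, source):
        # sum of hop distances from source to all reachable nodes (BFS)
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return sum(dist.values())

    def private_cost(adj, v, bought):
        return ALPHA * len(bought[v]) + dist_sum(adj, v)

    def improving_move(adj, v, bought):
        """Return a node w such that buying the edge (v, w) lowers v's private cost."""
        current = private_cost(adj, v, bought)
        for w in adj:
            if w != v and w not in adj[v]:
                adj[v].add(w); adj[w].add(v); bought[v].add(w)      # try the edge
                better = private_cost(adj, v, bought) < current
                bought[v].discard(w); adj[v].discard(w); adj[w].discard(v)
                if better:
                    return w
        return None

    # Path 1-2-3-4-5; node 1 checks whether buying a shortcut pays off.
    adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
    bought = {1: set(), 2: {1}, 3: {2}, 4: {3}, 5: {4}}
    print(improving_move(adj, 1, bought))   # -> 3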

Claudius Jähn, University of Paderborn

November 28, 2012, 2:00 pm, F1.110

Rendering Complex Scenes with Blue Surfels

In this talk I am going to present a new algorithm for rendering complex virtual 3D scenes in real time. The basic idea is to approximate complex parts of the scene by rendering a set of points. The points are computed in a preprocessing step and offer two important properties: Firstly, they are placed only on the visible surface of the scene's geometry (thereby, the method provides features of offline occlusion culling). Secondly, they are distributed and sorted in such a way that every prefix of the points is a good visual representation of the approximated part of the scene. An early evaluation of the method shows that it is capable of rendering scenes consisting of several billion triangles with high image quality.
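
One way to obtain the second property is a greedy farthest-point ordering, sketched below; this particular ordering is an illustrative assumption, not necessarily the preprocessing used by the presented method.

    from math import dist

    def farthest_point_order(points):
        # greedily append the point farthest from everything chosen so far,
        # so every prefix of the order is spread over the whole point set
        order = [points[0]]
        remaining = points[1:]
        min_d = {p: dist(p, order[0]) for p in remaining}
        while remaining:
            nxt = max(remaining, key=lambda p: min_d[p])
            order.append(nxt)
            remaining.remove(nxt)
            for p in remaining:
                min_d[p] = min(min_d[p], dist(p, nxt))
        return order

    points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (5, 5)]
    print(farthest_point_order(points))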

Benjamin Eikel, University of Paderborn

October 24, 2012, 2:00 pm, F1.110

Rendering Complex Scenes with Spherical Visibility Sampling

The rendering system I developed uses preprocessed visibility for rendering complex 3D scenes. In a preprocessing step, visibility information is determined and stored in a hierarchy of spheres. Every sphere stores visibility information for the objects in its volume. At runtime, a subset of the spheres is traversed depending on the viewer's position. The visibility information is used to send only potentially visible objects to the rendering pipeline. To better support complex scenes that contain viewpoints with little occlusion, the system has to be supplemented with an approximate rendering approach. The already available visibility data will be utilized to estimate the visual importance of objects for the final image. I will present the current state and the future tasks of my ongoing work.
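
A minimal sketch of the runtime traversal idea, with the sphere layout and the containment test as assumptions made only for illustration:

    from math import dist

    class Sphere:
        def __init__(self, center, radius, visible_objects, children=()):
            self.center = center
            self.radius = radius
            self.visible_objects = visible_objects    # precomputed potentially visible set
            self.children = list(children)

    def potentially_visible(sphere, viewer):
        # descend into a child sphere whose volume contains the viewer, if any
        for child in sphere.children:
            if dist(viewer, child.center) <= child.radius:
                return potentially_visible(child, viewer)
        return sphere.visible_objects

    leaf_a = Sphere((0, 0, 0), 5, {"house", "tree"})
    leaf_b = Sphere((20, 0, 0), 5, {"tower"})
    root = Sphere((10, 0, 0), 25, {"house", "tree", "tower"}, [leaf_a, leaf_b])
    print(potentially_visible(root, viewer=(1, 0, 0)))   # -> {'house', 'tree'}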

Christoph Weddemann, University of Paderborn

October 17, 2012, 2:00 pm, F1.110

Robot Swarm Movements on a Grid

A school of fish is a swarm: although there are thousands of individuals, they act and move like one global unit. Each fish can only see, and therefore influence, a small part of the swarm. If there is one fish leading the swarm, the question is how the others get to know the direction in which to move.

We adapt this problem for swarm theory and consider a swarm of indistinguishable robots on a two-dimensional grid. One robot is assigned as a leader and has additional knowledge of a fixed path. The leader moves along this route, and the other robots shall follow it without losing the connectivity of the swarm. In this presentation we will get to know the problem and take a look at first ideas for movement strategies. Since we want our robots to use only local information, we will see what difficulties may arise.
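
A toy local rule for illustration (not a strategy from the thesis): the leader walks its fixed path, and every other robot moves one grid step toward a visible robot that is about to leave its viewing range, so the pull of the leader propagates through the swarm using only local information. The viewing range is an assumed value and connectivity of the whole swarm is not proven for this rule.

    VIEW = 2  # constant viewing range, Chebyshev distance (assumed value)

    def step_toward(p, q):
        sign = lambda d: (d > 0) - (d < 0)
        return (p[0] + sign(q[0] - p[0]), p[1] + sign(q[1] - p[1]))

    def follower_step(me, robots):
        visible = [r for r in robots if r != me
                   and max(abs(r[0] - me[0]), abs(r[1] - me[1])) <= VIEW]
        # follow a neighbor that sits exactly at the edge of the viewing range
        critical = [r for r in visible
                    if max(abs(r[0] - me[0]), abs(r[1] - me[1])) == VIEW]
        return step_toward(me, critical[0]) if critical else me

    leader_path = [(0, 0), (1, 0), (2, 0), (3, 0)]
    robots = [(0, 0), (-1, 0), (-2, 0)]      # robots[0] is the leader
    for target in leader_path[1:]:
        robots[0] = target                   # leader follows its fixed path
        robots[1:] = [follower_step(r, robots) for r in robots[1:]]
    print(robots)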

(Introductory presentation of the Master's thesis, talk in German)