
Gems of Theoretical Computer Science

Seminar, Winter Term 2021/22

The seminar is based on a set of selected papers and textbook sections that demonstrate the beauty of problem solutions in the field of Theoretical Computer Science. We will find out that the preoccupation with sophisticated proof techniques, elegant arguments, and surprising constructions is highly enjoyable. The seminar is inspired by the book "Perlen der Theoretischen Informatik" by Uwe Schöning, in which he presents a collection of results demonstrating highlights of Theoretical Computer Science. Of course, the selection of seminar topics is affected by the supervisor's taste and research area.

 

Lecturer

Module

  • Seminar I / Seminar II

Dates & Deadlines

The seminar is expected to take place as a block seminar at the end of the lecture period - we hope that we will be able to meet in person, but if this is not the case, we will use BBB.

  • First meeting (distribution of topics):
    11.10., 4pm (F1.110)
  • Latest period for the initial meeting with your advisor (you are expected to have read the paper and come with concrete questions):
    1.11. - 3.11.
  • Short Talks (5-10 minute introduction to your topic):
    11.11., 9am - 12pm (room tba)
  • Submission deadline of the 0-version of seminar thesis (see Step II below):
    13.12., 11:59pm
    (We strongly recommend giving at least part of the thesis to your advisor in advance of this deadline, in consultation with them)
  • Submission deadline of the reviews (see Step III below):
    23.12., 11:59pm
  • Start test presentations (see Step IV below):
    10.1. - 14.1.
  • Submission deadline of the final version of the seminar thesis:
    30.1., 11:59pm
  • Block Seminar, Presentation of your work (see Step I below):
    2.2. & 3.2. (room & time tba) 

Topics and Registrations

The topics (see below) are assigned in the first meeting. Reserving topics in advance is not possible. You should have a look at the topics and the respective papers before the first meeting and make up your mind about your topic preferences. The application for this seminar is managed through the Jupyter System. Click here for more information on the application procedure.

Demands

I) Presentation

45-minute presentation.
15 minutes of discussion and questions.
Your audience: Your fellow students.
    
As a minimal requirement, the presentation should motivate your model, formally introduce the problem, explain the necessary techniques, and provide insight into the necessary algorithms and their analysis (i.e., with proofs!). For some ideas, see, e.g., "Scientific Working".

II) Seminar thesis

An essay of 12 to 20 pages, written according to the standards of a scientific paper. Submitting the work of others is cheating and a violation of academic plagiarism rules, which is a serious offense. We use a commercial, Internet-based plagiarism detection service to check your submissions. Simply copying the original work into your own thesis also leads to failure of the seminar.

Note that the 0-version of your thesis should be as close to your final thesis as possible. In particular, it must be complete content-wise (the minimum number of pages also applies to the 0-version!).

III) Reviews

  • Peer review procedure similar to scientific publications
  • You submit your thesis (paper) at the seminar's Panda page
  • Two peers (other students) review your submission
    • Read and understand the submitted paper
    • Criticize your paper
    • Make recommendations on how to improve
    • Be honest, polite, and helpful when writing your reviews
  • Each student has to write 2 reviews (each 1-2 pages)

IV) Test run of your presentation

  • Do a test run with your presentation.
  • Meet in groups of approx. 3-4 students
  • Give each other feedback
  • Negotiate a date yourselves (see the Panda page for the participants' email addresses)

We expect your active participation (as opposed to mere attendance) at all presentations of this seminar.

Grading

Your final grade will be based on the following:

  • Quality of your seminar thesis
  • Quality of your final presentation
  • Commitment, diligence, and compliance with our deadlines.

Note that we will grade each of the above parts individually, and that you have to pass each part (i.e., 4.0 or better) to pass this course.

Remarks

General

Each topic has one or several references that should be understood as starting points for your research. Hence, it is very likely that you will need further papers or textbooks to fully understand your topic. The actual content of your seminar thesis as well as your seminar talk should be discussed with your advisor.
It is your responsibility to contact your advisor and schedule appointments with them early enough. We strongly recommend discussing questions regarding the topic, the composition, and the structure of your written seminar thesis with your advisor. The same holds for your seminar talk. Note that finding an appointment directly before your thesis' submission or seminar talk cannot be guaranteed, since advisors need to balance appointments with their own teaching and research.
If you are looking for additional research papers, the following links might be helpful:

Seminar Thesis

We expect an essay of length 12 to 20 pages written according to the standards of a scientific paper. A simple summary of the referenced literature is insufficient. We expect you to identify the important ideas and proofs of the referenced literature and present them in a correct and appropriate way. The mandatory typesetting for your thesis is LaTeX (see below for details). All citations must be marked as such and referenced with their origin. Plagiarism directly leads to failure of the seminar. Note that plagiarism not only includes copies from the original literature, but also copies from other authors (e.g., other seminar theses) or translations of them.

Title and Abstract

Create a brief abstract that summarizes the problem and the main results. An abstract is kept short (typically 10-20 lines). Also come up with a title for your thesis (note that this is not necessarily the title of the referenced literature or the title from our topic list). Choose a title that, from your point of view, best describes your thesis.

References

Each thesis contains a reference list that includes at least the literature used in your thesis. In the digital libraries of DBLP, ACM, IEEE, or Springer you can find detailed bibliographic information. For conference proceedings or journals, ALWAYS specify the pages. For journals, additionally provide the volume and issue number. Do not rely on the authors to get their references right (e.g., the reference list in a paper or on the author's website); sometimes these references are wrong or incomplete. Also, do not blindly rely on search engines like CiteSeer or Google Scholar: always verify the correctness and completeness of your references. In particular, it is insufficient to only provide links to the papers or citations on the web.

Further Reading

For further reading, we recommend:


Topics

Topic 1:   Upper bounds for the depth of linear decision trees

Linear decision trees (or linear search algorithms, LSAs) operate on inputs x from R^n and proceed by evaluating predicates of the form "f(x)>0?", where f:R^n --> R is an affine function, i.e., f(x) = \sum_{i=1}^n {a_i x_i} - b, for real numbers a_1,..., a_n, b. For example, the famous NP-complete Knapsack problem can be decided by LSAs, one LSA for each dimension n. The major result of this topic is the construction of a family M_1, M_2, ... of LSAs so that M_n decides the knapsack problem for inputs from R^n (the so-called n-dimensional restriction of the knapsack problem) in time polynomial in n. (Convince yourself that this does not imply NP=P.) This result is extended to a variety of other problems, even some which are not known to be in NP.
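To get a feel for the model, here is a small illustrative sketch (not the paper's construction): a linear decision tree of depth 3 deciding whether a point of R^2 lies in a fixed triangle, where every internal node evaluates one affine predicate "f(x) > 0".

```python
# Illustrative sketch of an LSA: decide whether x in R^2 lies in the
# triangle with vertices (0,0), (1,0), (0,1). Each node evaluates one
# affine predicate "f(x) > 0" with f(x) = sum_i a_i*x_i - b.

def affine(a, b):
    """Return the predicate x -> (sum_i a_i*x_i - b > 0)."""
    return lambda x: sum(ai * xi for ai, xi in zip(a, x)) - b > 0

# Three predicates: x1 > 0, x2 > 0, and x1 + x2 < 1
# (the last one written as -x1 - x2 - (-1) > 0).
p1 = affine((1, 0), 0)
p2 = affine((0, 1), 0)
p3 = affine((-1, -1), -1)

def in_triangle(x):
    # One root-to-leaf path evaluates at most three predicates.
    return p1(x) and p2(x) and p3(x)

print(in_triangle((0.2, 0.3)))  # True
print(in_triangle((0.8, 0.8)))  # False
```

The topic's result shows that for the knapsack problem such trees can be made polynomially deep in n, even though the problem is NP-complete.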

Literature:

Advisor: Friedhelm Meyer auf der Heide

 

Topic 2:   Lower bounds for algebraic computation trees

Algebraic computation trees (ACTs) operate on inputs x from R^n and proceed by either computing functions g:R^n --> R as the sum or product of previously computed functions, or evaluating predicates of the form "f(x)>0?" for a previously computed function f. Note that f is a polynomial. In case we do not allow multiplying functions, f is a linear function, and the resulting ACT is an LSA (see the topic above). The results presented in this topic are the component counting lower bound for LSAs and ACTs, and examples like a quadratic lower bound for the knapsack problem.

Literature:

Advisor: Friedhelm Meyer auf der Heide

Topic 3:   On Integer Programming and Convolution

The authors present a new algorithm with pseudo-polynomial running time for the fundamental integer programming problem. The results are achieved in part via a smart application of the so-called Steinitz Lemma. Furthermore, a surprisingly close connection to the (min, +)-convolution problem is established. The paper also discusses applications to packing and scheduling problems.
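For orientation, a minimal sketch of the (min,+)-convolution problem the paper connects to: given sequences a and b, compute c[k] = min over i+j=k of a[i] + b[j]. The naive algorithm below takes O(n^2) time; whether a truly subquadratic algorithm exists is a well-known open question.

```python
# Naive (min,+)-convolution: c[k] = min_{i+j=k} (a[i] + b[j]).
# Quadratic time; the paper relates integer programming to the
# (conjectured) hardness of beating this bound.

def min_plus_convolution(a, b):
    n, m = len(a), len(b)
    c = [float("inf")] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            c[i + j] = min(c[i + j], a[i] + b[j])
    return c

print(min_plus_convolution([0, 2, 5], [0, 1, 4]))  # [0, 1, 3, 6, 9]
```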

Literature:

Advisor: Marten Maack

Topic 4:   A Fast and Simple Algorithm for the Money Changing Problem

The paper presents a surprisingly simple and efficient algorithm for the so-called Money Changing Problem and discusses applications. In the Money Changing Problem, k positive integers a₁ < ... < aₖ and a query integer M are given. The question is whether M can be written as a linear combination c₁a₁ + ... + cₖaₖ with non-negative integers cᵢ and, if so, how such a decomposition can be found. If the input numbers are co-prime, there is a largest integer, called the Frobenius number, that does not have such a decomposition. There is a data structure called the residue table that can be used to compute the Frobenius number in time O(a₁). The main result of the paper is an algorithm that computes the residue table in time O(ka₁). This approach can be used to solve the Money Changing Problem efficiently.
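The residue table idea can be sketched in a few lines. The entry n[p] is the smallest representable integer congruent to p modulo a₁; then M is decomposable iff n[M mod a₁] ≤ M, and the Frobenius number is max n[p] − a₁. The sketch below computes the table with a Dijkstra-style shortest-path search, which is simpler than (but slower than) the paper's O(ka₁) algorithm.

```python
import heapq

# Residue table for coins a1 < ... < ak: n[p] is the smallest
# representable integer congruent to p (mod a1). Computed here as
# shortest paths in the "residue graph" with edges p -> (p + a) % a1
# of weight a (NOT the paper's faster round-robin algorithm).

def residue_table(coins):
    a1 = coins[0]
    n = [float("inf")] * a1
    n[0] = 0
    heap = [(0, 0)]
    while heap:
        dist, p = heapq.heappop(heap)
        if dist > n[p]:
            continue
        for a in coins[1:]:
            q = (p + a) % a1
            if dist + a < n[q]:
                n[q] = dist + a
                heapq.heappush(heap, (dist + a, q))
    return n

def decomposable(coins, M):
    return residue_table(coins)[M % coins[0]] <= M

def frobenius(coins):
    # Largest non-representable integer (coins must be co-prime).
    return max(residue_table(coins)) - coins[0]

print(frobenius([4, 7]))      # 17 (= 4*7 - 4 - 7)
print(frobenius([6, 9, 20]))  # 43
```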

Literature:

Advisor: Marten Maack

Topic 5:   A local-ratio theorem for approximating the weighted vertex cover problem

The local ratio technique is a methodology for the design and analysis of algorithms for a broad range of optimization problems. The technique is remarkably simple and elegant, and yet can be applied to several classical and fundamental problems (including covering problems, packing problems, and scheduling problems). The local ratio technique uses elementary math and requires combinatorial insight into the structure and properties of the problem at hand. Typically, when using the technique, one has to invent a weight function for a problem instance under which every “reasonable” solution is “good”. The local ratio technique is closely related to the primal-dual schema, though it is not based on weak LP duality (which is the basis of the primal-dual approach) since it is not based on linear programming.
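As a concrete taste of the technique, here is a sketch of the classic local-ratio 2-approximation for weighted vertex cover (in the spirit of Bar-Yehuda and Even): repeatedly pick an edge whose endpoints still have positive weight, subtract the smaller of the two weights from both endpoints, and output every vertex whose weight drops to zero.

```python
# Local-ratio 2-approximation sketch for weighted vertex cover.
# Subtracting eps from both endpoints of an edge is the "weight
# function" step; vertices of zero residual weight form the cover.

def local_ratio_vertex_cover(weights, edges):
    w = dict(weights)
    for u, v in edges:
        eps = min(w[u], w[v])   # eps = 0 if one endpoint is already free
        w[u] -= eps
        w[v] -= eps
    return {v for v, wv in w.items() if wv == 0}

# Path a-b-c with weights 1, 3, 1: the algorithm returns {a, c},
# which here happens to be the optimal cover of weight 2.
cover = local_ratio_vertex_cover({"a": 1, "b": 3, "c": 1},
                                 [("a", "b"), ("b", "c")])
print(cover)
```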

Literature:

Advisor: Manuel Malatyali

Topic 6:   Analysis of UML Activities Using Dynamic Meta Modeling

Dynamic Meta Modeling (DMM) is a universal approach to defining semantics for languages syntactically grounded on meta models. DMM has been designed with the aim of getting highly understandable yet precise semantic models which in particular allow for a formal analysis. In this paper, we exemplify this by showing how DMM can be used to give a semantics to and define an associated analysis technique for UML Activities.

Literature:

Advisor: Christian Soltenborn

Topic 7:   Autonomous Mobile Robots: Refining the Computational Landscape

Within distributed computing, the study of distributed systems of identical mobile computational entities, called robots, operating in a Euclidean space is rather extensive. When a robot is activated, it executes a Look-Compute-Move cycle: it takes a snapshot of the environment (Look); with this input, it computes its destination (Compute); and then it moves towards that destination (Move). The choice of the times a robot is activated and how long its cycle lasts is made by a fair (but adversarial) scheduler; three schedulers are usually considered: fully synchronous (Fsync), semi-synchronous (Ssync), and asynchronous (Async). Extensive investigations have been carried out, under all those schedulers, within four models, corresponding to different levels of computational and communication powers of the robots: OBLOT (the weakest), LUMI (the strongest), and two intermediate models FSTA and FCOM. The many results for specific problems have provided insights on the relationships between the models and with respect to the activation schedulers. Recently, a comprehensive characterization of these relationships has been provided with respect to the Fsync and Ssync schedulers; however, in several cases, the results were obtained under some restrictive assumptions (chirality and/or rigidity). In this paper, we improve the characterization by removing those assumptions, providing a refined map of the computational landscape for those robots. We also establish some preliminary results with respect to the Async scheduler.

Literature:

Advisor: Jannik Castenow

Topic 8:   Polylogarithmic-Time Deterministic Network Decomposition and Distributed Derandomization

We present a simple polylogarithmic-time deterministic distributed algorithm for network decomposition. This improves on a celebrated 2^O(sqrt(log n))-time algorithm of Panconesi and Srinivasan [STOC’92] and settles a central and long-standing question in distributed graph algorithms. It also leads to the first polylogarithmic-time deterministic distributed algorithms for numerous other problems, hence resolving several well-known and decades-old open problems, including Linial’s question about the deterministic complexity of maximal independent set [FOCS’87; SICOMP’92]—which had been called the most outstanding problem in the area.
The main implication is a more general distributed derandomization theorem: Put together with the results of Ghaffari, Kuhn, and Maus [STOC’17] and Ghaffari, Harris, and Kuhn [FOCS’18], our network decomposition implies that P-RLOCAL = P-LOCAL.
That is, for any problem whose solution can be checked deterministically in polylogarithmic-time, any polylogarithmic-time randomized algorithm can be derandomized to a polylogarithmic-time deterministic algorithm. Informally, for the standard first-order interpretation of efficiency as polylogarithmic-time, distributed algorithms do not need randomness for efficiency.
By known connections, our result leads also to substantially faster randomized distributed algorithms for a number of well-studied problems including (Δ+1)-coloring, maximal independent set, and Lovász Local Lemma, as well as massively parallel algorithms for (Δ+1)-coloring.

Literature:


Advisor: Jannik Castenow

Topic 9:   Autonomous mobile robots with lights

We consider the well known distributed setting of computational mobile entities, called robots, operating in the Euclidean plane in Look-Compute-Move (LCM) cycles. We investigate the computational impact of expanding the capabilities of the robots by endowing them with an externally visible memory register, called light, whose values, called colors, are persistent, that is they are not automatically reset at the end of each cycle. We refer to so endowed entities as luminous robots.
We study the computational power of luminous robots with respect to the three main settings of activation and synchronization: fully-synchronous, semi-synchronous, and asynchronous. We establish several results. A main contribution is the constructive proof that asynchronous robots, illuminated with a constant number of colors, are strictly more powerful than traditional semi-synchronous robots.
We also constructively prove that, for luminous robots, the difference between asynchrony and semi-synchrony disappears. This result must be contrasted with the strict dominance between the models without lights (even if the robots are enhanced with an unbounded amount of persistent internal memory).
Additionally we show that there are problems that robots cannot solve without lights, even if they are fully-synchronous, but can be solved by asynchronous luminous robots with O(1) colors. It is still open whether or not asynchronous luminous robots with O(1) colors are more powerful than fully-synchronous robots without lights.
We prove that this is indeed the case if the robots have the additional capability of remembering a single snapshot. This in turn shows that, for asynchronous robots, to have O(1) colors and a single snapshot renders them more powerful than to have an unlimited amount of persistent memory (including snapshots) but no lights.

Literature:

Advisor: Jonas Harbig

Topic 10:   Asymptotically Optimal Gathering on a Grid

In this paper, we solve the local gathering problem of a swarm of n indistinguishable, point-shaped robots on a two-dimensional grid in asymptotically optimal time O(n) in the fully synchronous FSYNC time model. Given an arbitrarily distributed (yet connected) swarm of robots, the gathering problem on the grid is to locate all robots within a 2 x 2-sized area that is not known beforehand. Two robots are connected if they are vertical or horizontal neighbors on the grid. The locality constraint means that no global control, no compass, no global communication and only local vision is available; hence, a robot can see its grid neighbors only up to a constant L1-distance, which also limits its movements. A robot can move to one of its eight neighboring grid cells and if two or more robots move to the same location they are merged to be only one robot. The locality constraint is the significant challenging issue here, since robot movements must not harm the (only globally checkable) swarm connectivity. For solving the gathering problem, we provide a synchronous algorithm -- executed by every robot -- which ensures that robots merge without breaking the swarm connectivity. In our model, robots can obtain a special state, which marks such a robot to be performing specific connectivity preserving movements in order to allow later merge operations of the swarm. Compared to the grid, for gathering in the Euclidean plane for the same robot and time model the best known upper bound is O(n^2).

Literature:

Advisor: Jonas Harbig

Topic 11:   On the power of randomization in on-line algorithms

Against an adaptive adversary, we show that the power of randomization in on-line algorithms is severely limited! We prove the existence of an efficient "simulation" of randomized on-line algorithms by deterministic ones, which is best possible in general. The proof of the upper bound is existential. We deal with the issue of computing the efficient deterministic algorithm, and show that this is possible in very general cases.

Literature:

Advisor: Till Knollmann

 

Topic 12:   Non-Linear Ski Rental

We consider the following generalization of the classic ski rental problem. A task of unknown duration must be carried out using one of two alternatives called "buy" and "rent", each with a one-time startup cost and an ongoing cost which is a function of the duration. Switching from rent to buy also incurs a one-time cost. The goal is to minimize the competitive ratio, i.e., the worst-case ratio between the cost paid and the optimal cost, over all possible durations. For linear or exponential cost functions, the best deterministic and randomized on-line strategies are well known. In this work we analyze a much more general case, assuming only that the cost functions are continuous and satisfy certain mild monotonicity conditions. For this general case we provide (1) an algorithm that computes the deterministic strategy with the best competitive ratio, and (2) an approximation algorithm that, given ε>0, computes a randomized strategy whose competitive ratio is within a factor of (1+ε) of the best possible, in time polynomial in 1/ε. Our algorithm assumes access to a black box that can compute the functions and their inverses, as well as find their extreme points.
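To see what a competitive ratio means here, consider the classic *linear* special case (renting costs 1 per day, buying costs b once, no switching cost). The well-known break-even strategy "rent for b-1 days, then buy" is (2 - 1/b)-competitive; the sketch below verifies this numerically. The paper generalizes far beyond this setting.

```python
# Classic linear ski rental (an illustrative special case, not the
# paper's general algorithm): rent for b-1 days, then buy at cost b.

def break_even_cost(b, duration):
    """Cost paid by the break-even strategy for a trip of given length."""
    if duration < b:
        return duration          # we rented every day
    return (b - 1) + b           # rented b-1 days, then bought

def competitive_ratio(b, max_duration=1000):
    # Worst case over durations of (strategy cost) / (optimal cost),
    # where the optimum pays min(duration, b).
    return max(break_even_cost(b, t) / min(t, b)
               for t in range(1, max_duration))

print(competitive_ratio(10))  # 1.9, i.e., 2 - 1/10
```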

Literature:

Advisor: Till Knollmann

Topic 13:   The Complexity of Scheduling Trees with Communication Delays

Consider the problem of finding a minimum-length schedule on m machines for a set of n unit-length tasks with a forest of intrees as precedence relation, and with unit interprocessor communication delays. First, the authors prove that this problem is NP-complete; second, they derive a linear-time algorithm for the special case that m equals 2.

Literature:

Advisor: Simon Pukrop

Topic 14:   Scheduling on Two Unbounded Resources with Communication Costs

Heterogeneous computing systems have become a popular and powerful platform, containing several heterogeneous computing elements (e.g., CPU+GPU). In this paper, we consider two platforms, each with an unbounded number of processors. We want to execute an application represented as a Directed Acyclic Graph (DAG) using these two platforms. Each task of the application has two possible execution times, depending on the platform it is executed on. Also, there is a cost to transfer data from one platform to the other between successive tasks. The goal is to minimize the finish time of the last task of the application (usually called the makespan). We show that the problem is NP-complete for graphs of depth at least 3 but polynomial for graphs of depth at most 2. Finally, we focus on particular classes of graphs, providing polynomial-time algorithms for bipartite graphs, trees, and 2-series-parallel graphs with different assumptions on the communication delays.

Literature:

Advisor: Simon Pukrop
 

Topic 15:   Finite-time Analysis of the Multiarmed Bandit Problem

Reinforcement learning policies face the exploration versus exploitation dilemma, i.e., the search for a balance between exploring the environment to find profitable actions while taking the empirically best action as often as possible. A popular measure of a policy's success in addressing this dilemma is the regret, that is, the loss due to the fact that the globally optimal policy is not followed all the time. One of the simplest examples of the exploration/exploitation dilemma is the multi-armed bandit problem. Lai and Robbins were the first to show that the regret for this problem has to grow at least logarithmically in the number of plays. Since then, policies which asymptotically achieve this regret have been devised by Lai and Robbins and many others. In this work we show that the optimal logarithmic regret is also achievable uniformly over time, with simple and efficient policies, and for all reward distributions with bounded support.
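One of the simple policies analyzed in this line of work is UCB1: play each arm once, then always play the arm maximizing its empirical mean plus a confidence bonus sqrt(2 ln t / n_j). The sketch below is a minimal illustration with Bernoulli arms (the exact index and constants in the paper may differ slightly).

```python
import math
import random

# Minimal UCB1 sketch: empirical mean + sqrt(2 ln t / n_j) bonus.
# Rewards must lie in [0, 1]; arms are callables taking an RNG.

def ucb1(arms, rounds, rng=random.Random(0)):
    counts = [0] * len(arms)    # n_j: plays of arm j
    sums = [0.0] * len(arms)    # accumulated reward of arm j
    for t in range(rounds):
        if t < len(arms):
            j = t               # initialization: play each arm once
        else:
            j = max(range(len(arms)),
                    key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2 * math.log(t) / counts[i]))
        reward = arms[j](rng)
        counts[j] += 1
        sums[j] += reward
    return counts

# Two Bernoulli arms with means 0.4 and 0.6: after 5000 rounds the
# better arm should have received the vast majority of the plays.
counts = ucb1([lambda r: float(r.random() < 0.4),
               lambda r: float(r.random() < 0.6)], 5000)
print(counts)
```

Logarithmic regret means the suboptimal arm is played only O(log n) times, which the printed counts illustrate empirically.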

Literature:

Advisor: Jonas Hanselle  

 


Submission of your Seminar Thesis

The thesis has to be created with LaTeX using our customized template, which you can download here, or you can clone the git-repository. We will create a complete volume consisting of all submissions.

You have to submit the LaTeX sources (your subfolder from the template is enough) AND a complete pdf of your seminar thesis. All files should be contained in only one zip or tar.gz file. Please do NOT include your matriculation number in your thesis.

For the LaTeX document you have to follow the following rules:

  • Each author gets their own prefix. (Example: the author "Matthias Fischer" gets the unique prefix "MaFi". Make sure there are no conflicts with other students.)
  • All your LaTeX files and pictures have to be contained in a single subfolder of the template using your prefix as name.
  • There is only one "*.tex" file for your complete seminar thesis.
  • The name of your LaTeX file consists only of your prefix supplemented by the suffix ".tex" (e.g., "MaFi.tex").
  • You can modify the "SI" folder and "SI.tex" in the template as a starting point.
  • The document must be compiled with pdflatex. Hence, pictures must be included as pdf, jpg or png.
  • To compile the document, you need to include your LaTeX file by adding the command "\includeseminar{...}" to the "main.tex" file in the root folder. (e.g., "\includeseminar{MaFi/MaFi}")
  • Your file must be encoded in UTF-8.
  • Your top-level sectioning command is \section. (Thus, you can use \section, \subsection, \subsubsection, \paragraph, etc.)
  • To distinguish all self-defined commands, labels, and references, each author uses their own prefix (e.g., "MaFi" for "Matthias Fischer").
  • Each command, label and reference consists of the prefix, followed by an individual identifier. Thus, if Matthias wants to call a label "myCoolLabel", the label has to be defined by \label{MaFi:MyCoolLabel}.
  • After the \maketitle call in your LaTeX document, you have to add your abstract with the command "\abstract{ ... }". After the abstract add your local table of contents with the commands "\makeminitoc".
  • We only use the svmult theorem environments. This means the option "nospthms" and the package "amsthm" cannot be used. (The package "amsthm" is not compatible with "svmult".)
  • We do not use the BibTeX compiler to process literature references. Instead, add your literature references as presented in the template file "SI.tex". You have to follow the style given in the template file, i.e., "author, title, year, ...".
  • To format the pseudocode of your algorithms, you can either use algorithmicx together with its floating environment "algorithm" from the package algorithms, or use the listings environment. We recommend starting with the predefined commands given by the package algpseudocode. We do not use the package "algorithmic".
  • You are not allowed to modify the "main.tex" file (apart from the \includeseminar command) or any other file in the root folder of the template. This also means that you cannot use any packages other than those specified in the main file.
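Putting the rules above together, a thesis file might look roughly like the following sketch. This is a hypothetical illustration only: the actual commands and preamble come from the template, so always start from the provided "SI/SI.tex" rather than from this snippet.

```latex
% Hypothetical sketch of "MaFi/MaFi.tex" (author prefix "MaFi").
% The real starting point is the template's "SI/SI.tex".
\title{A Title You Chose Yourself}
\author{Matthias Fischer}
\maketitle
\abstract{10--20 lines summarizing the problem and the main results.}
\makeminitoc

\section{Introduction}\label{MaFi:Introduction}
All labels carry the author prefix, see Section~\ref{MaFi:Results}.

\section{Results}\label{MaFi:Results}
% Figures included as pdf, jpg, or png (compiled with pdflatex).
```

In "main.tex", this file would then be included via "\includeseminar{MaFi/MaFi}".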