
Project Group ABM - Automated Benchmark Management


Due to a large number of applicants, we kindly ask all interested students to solve this small challenge.

Please hand in your solutions (source code) by Monday, March 12th, 23:59 to the contacts listed below.

Everyone is encouraged to submit an answer to the challenge, even if it is incomplete. Participants will be selected based on code quality and demonstrated problem-solving skills.

We remain available should any questions arise concerning the challenge.

Presentation slides

Initial presentation slides from 29.01.2018.

Automated Benchmark Management (ABM)

A common way of testing software or research prototypes is to use well-known benchmark suites such as the DaCapo suite [1]. However, such large benchmarks are hard to create and maintain, and this work is often done by hand. Moreover, benchmark suites currently exist only to test for specific properties and are not necessarily adapted to the needs of the software under test. The Automated Benchmark Management (ABM) methodology [2] was created to address these shortcomings of current benchmark suites. It aims at automating the process of benchmark creation and maintenance, and makes it fully customizable, so that users can create benchmark suites adapted to their needs. We are currently building a website [3] that implements the ABM methodology. The current implementation crawls GitHub for open-source, real-world projects. It allows users to filter out unsuitable projects, and to create and update collections from the remaining projects.

The Project

The current workflow of ABM allows users to query various properties of GitHub projects to retrieve a fitting set of projects for the benchmark. However, many criteria, such as code metrics, the presence of security flaws, the presence of bugs, or specifics of the project build, are not available through GitHub queries. Moreover, as we add more sources to the ABM system, such as BitBucket or Maven, using a source-specific query language for every source would become very cumbersome for our users. We envision a different workflow, in which users query projects for their benchmark collection by formulating queries in a domain-specific language, which is then mapped to the various sources. As some properties such as code metrics are not available from platforms like GitHub, we are currently working on another project (Delphi) which provides this information for ABM. Delphi can be treated as another source of information for the workflow project and should be integrated into the new workflow.
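The core idea of the envisioned workflow can be sketched as follows: a single parsed query acts as one predicate over a source-agnostic project model, regardless of which platform a project came from. Note that the project fields, sample data, and query syntax below are purely illustrative assumptions, not the actual ABM data model.

```java
import java.util.*;
import java.util.function.Predicate;
import java.util.stream.*;

// Hypothetical source-agnostic project record; field names are illustrative.
record Project(String name, String source, int stars, double coverage) {}

public class QuerySketch {
    public static void main(String[] args) {
        // Projects as they might arrive from different sources.
        List<Project> projects = List.of(
            new Project("alpha", "GitHub",    1200, 0.81),
            new Project("beta",  "BitBucket",  300, 0.92),
            new Project("gamma", "GitHub",      50, 0.40));

        // A single predicate stands in for a parsed DSL query such as
        // "stars > 100 AND coverage > 0.5"; the mapping of such a query
        // to each source's native API is elided here.
        Predicate<Project> query = p -> p.stars() > 100 && p.coverage() > 0.5;

        // Matching projects are returned regardless of their origin.
        List<String> matches = projects.stream()
            .filter(query)
            .map(Project::name)
            .collect(Collectors.toList());
        System.out.println(matches); // [alpha, beta]
    }
}
```

In this sketch, adding a new source only requires mapping its projects into the common model; the query itself stays unchanged.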

Using this new workflow, users receive matching projects regardless of their origin. They will also have a richer user experience, as more properties are available to query. Manual selection after a query has run can thus be minimized.

Inexperienced users should be equipped with a query builder to quickly compose queries using a web UI. More experienced users may enter queries manually. In order to reproduce queries (e.g., for collection maintenance or experiment reproduction), it should be possible to export and import queries and result sets from the system.

As users are keen on receiving a minimal collection matching all of their requirements, it may also be interesting to include an optimization step in the workflow, in which the system removes projects from a collection or query result set that are redundant with respect to fulfilling the requirements. This way, collections shrink in size and evaluations therefore run faster for our users.
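One simple way to realize such an optimization step is a greedy reduction: repeatedly drop any project whose required properties are already covered by the remaining projects. The sketch below assumes a toy requirement model (named properties per project); the property names, project names, and data structures are illustrative assumptions, and the real criteria are left open.

```java
import java.util.*;

// Greedy redundancy removal: a project is dropped if the collection still
// covers all required properties without it. This yields a smaller (though
// not necessarily minimum) collection.
public class MinimizeCollection {
    public static void main(String[] args) {
        // Which properties each project covers (illustrative data).
        Map<String, Set<String>> covers = new LinkedHashMap<>();
        covers.put("alpha", Set.of("reflection", "io"));
        covers.put("beta",  Set.of("io"));
        covers.put("gamma", Set.of("concurrency"));

        Set<String> required = Set.of("reflection", "io", "concurrency");
        Set<String> kept = new LinkedHashSet<>(covers.keySet());

        for (String candidate : covers.keySet()) {
            // Properties covered by everything except the candidate.
            Set<String> without = new HashSet<>();
            for (String p : kept)
                if (!p.equals(candidate)) without.addAll(covers.get(p));
            // Drop the candidate if it is redundant.
            if (without.containsAll(required)) kept.remove(candidate);
        }
        System.out.println(kept); // [alpha, gamma]
    }
}
```

Here "beta" is removed because "alpha" already covers "io"; finding a truly minimum collection is the NP-hard set-cover problem, so a greedy heuristic like this is a pragmatic starting point.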

Your work will contribute to the implementation of the ABM website, helping future researchers to soundly and correctly evaluate their research prototypes. With your work on this project, you will actively help to raise the quality of international research.

In short, the tasks you will perform as part of this PG will be the following:

  • Design a new workflow for ABM that minimizes the burden on the user.
  • Design a user interface that matches the new workflow.
  • Add missing functionalities.
  • Implement the changes.
  • Evaluate and test the changes.

Learning outcomes

In this project, you will learn/practise the following:

  • Design the architecture of a web application (software architecture).
  • Design and prototype graphical user interfaces (iterative user-experience design).
  • Web-application development: opportunities to work with AngularJS on the frontend, Java and Scala on the backend, and SQL to query our databases.
  • Test your code (functional automated testing).
  • Design and conduct user studies (usability testing).
  • Document and present an independent part of a bigger project.
  • Collaborate with other teams (University of Alberta - Canada, TU Darmstadt - Germany, Fraunhofer IEM - Germany).


Requirements

  • Programming experience. This PG will be programming-heavy.
  • Good understanding of the Java and JavaScript languages.
  • Experience with software design and efficient programming.
  • Knowledge of relational database systems.
  • Prior knowledge of JavaEE, AngularJS and Docker is helpful, but not required.


[1]  Stephen M. Blackburn, et al. 2006. The DaCapo benchmarks: java benchmarking development and analysis. In Proceedings of the 21st annual ACM SIGPLAN conference on Object-oriented programming systems, languages, and applications (OOPSLA '06). ACM, New York, NY, USA, 169-190. DOI =

[2] Lisa Nguyen Quang Do, Michael Eichberg, and Eric Bodden. 2016. Toward an automated benchmark management system. In Proceedings of the 5th ACM SIGPLAN International Workshop on State Of the Art in Program Analysis (SOAP 2016). ACM, New York, NY, USA, 13-17. DOI:


To attend the course, you have to register in the PAUL system as a participant.

Contact information:

Lisa Nguyen

Ben Hermann