Portfolio

Fraunhofer MEVIS Edge Cloud

2022–2024

A screenshot of a local Nomad server showing an empty cluster.
Nomad was one of the services under my responsibility.

At Fraunhofer MEVIS, I spearheaded the continuous development of the on-premise Nomad cluster and its surrounding tool landscape, including Vault and Consul servers as well as custom web applications for easy scheduling of common deep learning tasks. Additionally, I conceptualized and implemented our infrastructure on top of a managed on-premise OpenStack in close collaboration with our IT department, the external service provider, and internal stakeholders.

MeVisLab

2022–2024

A preview of the MeVisLab editor featuring different modules for visualization and analysis of medical images.
The MeVisLab framework and editor by the MeVis Medical Solutions AG.

MeVisLab is an editor and framework for prototyping image analysis algorithms. It focuses on medical image processing and includes a node-based editor to quickly evaluate new ideas and create specialized standalone applications. It is mainly developed by MeVis Medical Solutions, with Fraunhofer MEVIS contributing modules as well. At Fraunhofer MEVIS, I took part in its development: my main responsibility was the integration of third-party libraries in C++ and Python. I also maintained parts of the build infrastructure at Fraunhofer MEVIS, especially around Conan, which is used to build and cache third-party artifacts. Additionally, I supported users with CMake-related questions and introduced SBOMs for our third-party software.

D(e)ad Jokes

2024

From the perspective of a comedian on stage, we can see two cards with a joke opener and a punchline. The audience is hidden by the cards.
Finding the right story or punchline can be difficult, especially if your audience has different opinions on what is funny.

D(e)ad Jokes is a comedian's nightmare: they need to find a good joke for an ever-demanding audience. I got the chance to learn how to create a pixelated look and feel using a combination of low- and high-resolution cameras.

Held­innen­aben­teuer

2023

A map of the city of Kellenhusen with a marker in the northern part, close to Kellenhusen's boar pen. In the foreground, a dialog box encourages the player to walk there to gather some food.
Lohse directs the heroines to the boar pen in Kellenhusen to gather some rations.

I built an AR treasure hunt game as a birthday gift for a "Divinity: Original Sin 2" fan. The game features Lohse helping the heroines level up by training their heroic skills. The group of heroines solved puzzles and challenges along the two-hour treasure hunt. The game also included real props: a "potion box" filled with energy drinks, party hats for armor, a box of chocolate bars as rations, and a dinosaur piñata as the final boss.

Robowars

2023

Three 2D cartoon-style robots drive around an arena and shoot at each other.
The goal of the game was to program the best robot. To that end, the robots could sense the world and act on that information, or be controlled by a human.

For a seminar, I implemented a small game engine and game. In Robowars, the participants programmed their own bots in Python to learn about and discuss, among other things, Python packaging, containerization, CI/CD, and running containers on Nomad. To handle many robots, I replaced Python's xmlrpc library with an asynchronous alternative I implemented with Starlette. The frontend is written in Svelte.
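The scaling benefit of that async rewrite can be sketched with plain asyncio (an illustrative toy, not the seminar's actual Starlette code): many robot requests are awaited concurrently instead of blocking one another.

```python
import asyncio

# Illustrative sketch only: shows why an asynchronous server handles
# many robots better than Python's blocking xmlrpc. Each handler
# awaits I/O, so the event loop interleaves all robots concurrently.

async def handle_robot(robot_id: int) -> str:
    # Stand-in for waiting on a robot's command over the network.
    await asyncio.sleep(0.01)
    return f"robot-{robot_id}: ok"

async def serve_all(n: int) -> list:
    # All n robots are served concurrently, so the total wall time
    # stays close to 0.01 s instead of n * 0.01 s with blocking calls.
    return await asyncio.gather(*(handle_robot(i) for i in range(n)))

results = asyncio.run(serve_all(50))
```

The real service additionally exposes these handlers over HTTP routes, which is where Starlette comes in.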

Rothnag'Ka Trk

2023

The game's title screen, showing a dark tree looming over an old shed.
The well rooted one requires constant care to stay rooted.

In this game, the main character needs to take care of a tree by gathering water and food. Each item required to take care of the tree can be gathered in a different mini-game, but the tree becomes more demanding over time. I was responsible for the scene changes and for keeping track of the game state, leveraging Unity's SceneManager.

Deep Understanding of Everyday Activity Commands for Household Robots

2018–2022

A schematic representation of the pipeline to transform natural language commands into an action plan.
The DLU pipeline to transform a sentence into a task plan.

I built the DLU pipeline as part of the EASE project. It takes a natural language command and a scene as input, parses the utterance into a semantic specification, and uses ontological knowledge and previous experience to create a task plan. This task plan can be simulated before a real robot executes it.
S. Höffner, R. Porzel, M. M. Hedblom, M. Pomarlan, V. S. Cangalovic, J. Pfau, J. Bateman, R. Malaka (2022): Deep Understanding of Everyday Activity Commands for Household Robots. Semantic Web, IOS Press.

Portfolio Chronicle

2020–2021

A screenshot of a newspaper-like website containing some of Sebastian Höffner's past works.
The Portfolio Chronicle – Volume MMXXI No. 2

I created a portfolio in the style of a classic newspaper, highlighting some of my past achievements and awards. It is now outdated, but I kept it online.

Foundations of the Socio-physical Model of Activities (SOMA) for Autonomous Robotic Agents

2021

A screenshot of SOMA in Protégé.
A screenshot of a part of SOMA in Protégé.

SOMA is an ontology for everyday activities based on the DUL upper model. It extends DUL with different Event types, e.g., States and Accidents, and also contains concepts of typical household items in its extensions. I used SOMA as the basis for DLU.
D. Beßler, R. Porzel, M. Pomarlan, A. Vyas, S. Höffner, M. Beetz, R. Malaka, J. Bateman (2021): Foundations of the Socio-physical Model of Activities (SOMA) for Autonomous Robotic Agents. FOIS 2021.

Initiation

2021

Two magic apprentices are fighting their way through a labyrinth. One of them casts a fire shield, while the other catches up to the enemies.
Two apprentices on their way to the next challenge. One catches up, while the other protects itself from the enemies with a shield spell.

In this multiplayer game, you play a witch or wizard apprentice at the end of their training. But to become part of the inner circle of witches and wizards, you need to survive one last challenge: losing yourself to find wizard wisdom. With each successful task, the apprentices lose one of their five abilities, so they must cooperate to benefit from all of them.

Breaking The Experience: Effects of Questionnaires in VR User Studies

2020

In this study, we investigated whether taking people out of their virtual experience causes a break in presence, which would eventually lead to different questionnaire results. We measured skin conductance as a proxy for participant stress and found changes when participants were asked to take off their VR headset. This indicates that it makes sense to administer questionnaires in VR instead of outside of it.
S. Putze, D. Alexandrovsky, F. Putze, S. Höffner, J. D. Smeddinck, R. Malaka (2020): Breaking The Experience: Effects of Questionnaires in VR User Studies. CHI '20, ACM.

Examining Design Choices of Questionnaires in VR User Studies

2020

We investigated different ways of presenting questionnaires in VR, comparing design choices such as using a controller as a laser pointer. To reduce bias, we created a game with a different input modality to be played before answering the questionnaires, in which the players got immersed by shooting balloons with arrows.
D. Alexandrovsky, S. Putze, M. Bonfert, S. Höffner, P. Michelmann, D. Wenig, R. Malaka, J. D. Smeddinck (2020): Examining Design Choices of Questionnaires in VR User Studies. CHI '20, ACM.

Machine Learning Exercise Sheets

2016–2019

A world map in different colors. Similar colors represent similar attributes. Northern parts of America, central Europe, and Australia share similar orange colors, another group in pink colors are South America and Asia. A third group, consisting mainly of African countries, is shown in shades of blue.
A self-organizing map is used to color a world map, such that similar colors represent similar attributes per country.

Because the Machine Learning world flocked to Python (and the university's MATLAB licenses were not renewed), we translated old exercises from MATLAB to Python and created new and updated exercises for the Machine Learning class. The exercises range from custom implementations of decision trees to multi-layer perceptrons. My favorite implementation is the self-organizing map, as it is a powerful tool to identify similarities in data.
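As a rough illustration of how such a self-organizing map works (a minimal NumPy sketch, not the actual exercise code), one training step pulls a grid of weight vectors toward an input, with the pull strength falling off in a Gaussian neighborhood around the best-matching unit:

```python
import numpy as np

# Minimal self-organizing map update step (illustrative sketch).
# A 2D grid of weight vectors is pulled toward each input; the pull
# is strongest at the best-matching unit (BMU) and decays with grid
# distance, which is what makes nearby units encode similar data.

def som_step(weights, x, lr=0.5, sigma=1.0):
    """weights: (rows, cols, dim) grid; x: (dim,) input vector."""
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    rows, cols = np.indices(dists.shape)
    grid_dist2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    influence = np.exp(-grid_dist2 / (2 * sigma**2))
    weights += lr * influence[..., None] * (x - weights)
    return weights, bmu

rng = np.random.default_rng(0)
w = rng.random((5, 5, 3))
x = np.array([0.9, 0.1, 0.5])
before = np.linalg.norm(w - x, axis=-1).min()
w, bmu = som_step(w, x)
after = np.linalg.norm(w - x, axis=-1).min()
```

Repeating this over many inputs while shrinking `lr` and `sigma` yields the kind of clustering shown in the world map above.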

cvloop

2016–2019

An excerpt of a Jupyter notebook with a side-by-side view of a still from a video. In the still, some pedestrians walk by on a street. The left still shows the original frame, the right one the same frame with inverted colors.
cvloop demoed on the OpenCV 768x576.avi sample video. On the left, the original frame is shown, on the right, I apply a color inversion filter.

To allow students to implement Computer Vision algorithms in a more interactive and fun way than on example images or videos, I developed cvloop, a tool to interact with OpenCV video streams from within Jupyter Notebooks. It abstracts away all the frame grabbing and allows applying functions to the video images – to the whole image or only some parts of it.
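The core idea can be sketched without OpenCV (hypothetical names, not cvloop's actual API; the real tool also does the frame grabbing and the side-by-side notebook rendering): every frame is simply passed through a user-supplied function, like the color inversion in the screenshot above.

```python
import numpy as np

# Hypothetical sketch of cvloop's core idea, without OpenCV: apply a
# user-supplied function to every frame of a stream.

def invert(frame):
    # The color inversion filter from the screenshot above.
    return 255 - frame

def process_stream(frames, function):
    # cvloop grabs these frames live from a camera or video file;
    # here a plain list stands in for the stream.
    return [function(f) for f in frames]

# A fake "video": one black 4x4 RGB frame.
frames = [np.zeros((4, 4, 3), dtype=np.uint8)]
out = process_stream(frames, invert)
```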

Give MEANinGS to Robots with Kitchen Clash: A VR Human Computation Serious Game for World Knowledge Accumulation

2019

The knowledge pipeline introduced in MEANinGS.
The VR game Kitchen Clash is used in this paper to gather human behavior data, which I leveraged for the NLP and simulation in DLU.

In MEANinGS, we built a VR game to gather behavior data in a kitchen environment. I used some of the data to create ontological knowledge from experience in DLU.
J. Pfau, R. Porzel, M. Pomarlan, V. S. Cangalovic, S. Grudpan, S. Höffner, J. Bateman, R. Malaka (2019): Give MEANinGS to Robots with Kitchen Clash: A VR Human Computation Serious Game for World Knowledge Accumulation. ICEC-JCSG 2019, Springer.

Trajectory Following in Unity

2019

A simulated cup follows a trajectory denoted with purple spheres from a table to a shelf.
A cup follows a trajectory denoted by purple spheres from a table to a shelf. If you zoom in, you can see very thin lines along the trajectory.

For one of my projects, I implemented a plugin in Unity to allow objects to follow trajectories. To use it, one specifies poses along a trajectory using Unity GameObjects and attaches a script to the object that should follow them. The trajectory is interpolated between the points using a Hermite spline. I also handled ease-in/ease-out and a toggle between physics-based and direct movement. Even though it worked for me, let me kindly suggest using iTween instead.
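The interpolation itself is standard cubic Hermite blending; here is a small Python sketch of the math for one segment (the actual plugin is C# in Unity, and the names are illustrative):

```python
# Cubic Hermite interpolation between two points p0 and p1 with
# tangents m0 and m1 (illustrative sketch of the spline math used
# in the plugin, applied per axis).

def hermite(p0, m0, p1, m1, t):
    """Position at t in [0, 1] along the segment."""
    t2, t3 = t * t, t * t * t
    h00 = 2 * t3 - 3 * t2 + 1   # basis for the start point
    h10 = t3 - 2 * t2 + t       # basis for the start tangent
    h01 = -2 * t3 + 3 * t2      # basis for the end point
    h11 = t3 - t2               # basis for the end tangent
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1
```

Chaining such segments over consecutive pose pairs, with tangents derived from the neighboring GameObjects, yields a smooth path through all specified poses.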

Gaze Tracking Using Common Webcams

2018

A GUI with a central area in which a picture of a man is rendered. The GUI also features smaller images of the eyes with marked pupils.
The debug GUI of Gaze offers insights into the model. Here we can see how it detects pupils.

For my master's thesis, I attempted to track gaze using only a laptop camera. Eventually, I was able to track eye centers and whether a person looks roughly at the upper or lower half of the screen. The center detection is based on an implementation by Tristan Hume. For the gaze estimation, I assumed an idealized head model based on medical measurements and mapped faces onto it using dlib's face landmark detection models.

ANN for depth map estimations

2017

A graphical representation of the coarse deep learning training graph as built in TensorFlow.
The computational graph of our distributed training.

In this project, we aimed to reimplement two papers on depth estimation from monocular images on the Sun Grid Engine, a process scheduler similar to Slurm. We used TensorFlow's parameter server to allow multiple worker nodes to train simultaneously and share their weights.

Basic Programming in Python Lecture

2017

A still of a lecture recording. Sebastian Höffner is standing in front of the projected slides and points to a slide showing a finite state automaton.
In my lecture on Python programming, I also recapped finite state automata to introduce regular expressions.

During my master's degree I taught a class on Python programming. The lecture was tailored towards students who started their master's program but had no previous training in programming. Other students also joined as Python was not taught in the Computer Science classes but very relevant for classes such as Machine Learning or Computer Vision. I created all lecture materials, delivered the lecture, and offered a weekly walk-in practice session to assist in the homework exercises. My teaching performance was rated among the top 10% at the university in that year.

Wasabi

2015–2016

The Wasabi administration interface.
The bookmark feature of Wasabi allows sorting bookmarked experiments to the top.

My first major professional project was Wasabi at Intuit. As part of the Intuit Data Engineering and Analytics team, I implemented several features in the A/B testing service, including experiment bookmarking, audit logs, and data versioning for the ETL pipeline.

Lecture Notes: Probabilistic Modeling of Perception and Cognition

2014

A figure from the manuscript. It contains two plots: on the left a cumulative distribution function and on the right the corresponding probability density function. In both cases, the most likely 95% of the distribution are marked.
A comparison between a CDF and PDF from the script, rendered with pgfplots.

The lecture PMPC had no script available, so we set out to share our notes and exercise solutions – the result was an almost complete script covering the full semester, including exercises and solutions. This was a perfect opportunity to improve not only our modeling and probability theory skills, but also our LaTeX and writing abilities.

Probabilistic Robot Localization in Continuous 3D Maps

2014

A wireframe representation of a room filled with multiple small red arrows. A bigger blue arrow and a green arrow float slightly off the center of the room, the green arrow about a meter above the blue one. The red arrows point in random directions, the green and blue arrows roughly in the same direction.
To adjust the green arrow to match the blue arrow as closely as possible, a number of random samples, the red arrows, is drawn. By simulating the expected sensor input for the given poses and comparing it to the actual sensor readings, the estimated pose can be updated.

In my bachelor's thesis, I implemented a particle filter for probabilistic robot localization in 3D. To localize itself in a known map, the robot uses a laser scanner and compares sampled poses in the map with its sensor data to select the most likely candidate. The algorithm is based on the classical 2D AMCL. I wrote the implementation in C++ for ROS.
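The core loop of such a particle filter can be sketched in toy 1D form (illustrative only; the thesis implementation is C++ for ROS and works on full 3D poses with simulated laser scans): weight each particle by its measurement likelihood, then resample in proportion to those weights.

```python
import math
import random

# Toy 1D particle-filter step (illustrative sketch). Each particle is
# a candidate position; it is weighted by how well its expected
# measurement matches the actual sensor reading, then the particle
# set is resampled so likely candidates survive.

def filter_step(particles, measurement, noise=0.5):
    # Gaussian likelihood of the measurement given each particle.
    weights = [math.exp(-((p - measurement) ** 2) / (2 * noise**2))
               for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw with probability proportional to the weights.
    return random.choices(particles, weights=weights, k=len(particles))
```

After a few such steps, interleaved with a motion update, the particle cloud collapses onto the robot's true pose – the red arrows in the figure converging toward the blue one.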

Fire Simulation

2013

A simulated flame flickers in front of a brick wall. The wall is illuminated accordingly and the heat causes distortion above and around the flame.
To give our simulated flame a more lively feel, we let it emit light to illuminate the walls around it.

The fire particle simulation we built in the parallel algorithms practical course uses low-pressure areas to direct particles into a cone-like structure. It also features heat flicker, causing distortions above the visible flame.

Water Rendering

2012

Two stills of the same scene: a rendered water basin and river. On the left, the simulated water particles are rendered as solid blue spheres; on the right, with shading, transparency, occlusion, and other effects to give them a realistic look and feel.
We created a more realistic look and feel for the water particles flowing down the riverbed.

In the practical course for computer graphics, my team received the positions and velocities of simulated water particles in a terrain. We used the terrain and particle information to render a water surface using deferred shading.