Technische Universität Braunschweig
Institute of Software Engineering and Automotive Informatics
Theses and Projects

Topic Presentation Winter Term 2025

We present all current topics for projects and for Bachelor's and Master's theses at the ISF. In addition, we will briefly present our course offerings for the winter term 2025. All interested parties are cordially invited.


When? Tuesday, 15.07.2025, from 16:45 to 18:15

Where? PK 11.2

The slides with the topics will be published here on the same day.

Topic Presentations (past semesters)

Summer Semester 2025

We present all current topics for projects and for Bachelor's and Master's theses at the ISF. In addition, we briefly present our course offerings for the summer semester 2025. All interested parties are cordially invited.


When? Monday, 27.01.2025, from 16:45 to 18:15

Where? IZ 161 and via Webex (hybrid)

The slides with the topics will be published here on the same day.

  • Teaching in Summer Term 2025 and Open Thesis Topics

Topics

We regularly update the following list of topics. We are also open to your own topic suggestions.

Looking for a team project? Here you can find information on the current team project at the ISF.


Contact

If you are interested or have any questions, feel free to contact the responsible staff member. They can then discuss the topic with you in more detail and try to adapt it to your personal preferences.

Legend

P: Project
B: Bachelor's Thesis
M: Master's Thesis

Software Product Line Reengineering

Analyzing the Research Workflow in Python Research Scripts (P/M)

Context

While every Data Science research script has an input and an output, there are multiple steps in between that transform the provided data. Given a sufficiently large sample, it should be possible to derive a workflow that most scientists adhere to and compare it with the workflows suggested in the literature [1].

This workflow can then be used for further analyses, such as the number of function calls for each stage in the workflow. To facilitate that, we need to create a mapping of popular data science functions to those identified stages.

A short example of how function calls are annotated with a stage in the workflow (here in R):

sample <- read.csv("sample.csv", sep = ";") #import

plot(sample$var1 ~ sample$var2, pch = 20, col = "grey58", ylim = c(0, 1), xlim = c(0, 1)) #visualize

abline(lm(sample$var1 ~ sample$var2)) #visualize


Research Problem

In this work, we want to explore whether there is a common workflow across disciplines (such as Chemistry [2], Biology, or the Social Sciences [3]) that high-quality papers adhere to. By exploring outstanding journals and conferences in each field, we want to collect samples of the way they structure their scripts. The derived workflow is then compared to literature on proposed workflows to check for overlap. Furthermore, we want to create a mapping from the functions used in the process to their respective stage in the derived workflow.


Tasks

  • Identify a set of conferences/journals as a basis for a literature review.
  • Collect recent publications from this set that do data science in Python.
  • Derive a multi-stage workflow and compare it to literature on data science workflows.
  • Identify popular libraries that are used and map their functions to the stages of your workflow.
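
To make the intended function-to-stage mapping concrete, the sketch below counts calls per workflow stage in a Python script via its AST. The stage mapping is a hypothetical hand-made example, not the mapping this project would produce:

```python
import ast
from collections import Counter

# Hypothetical mapping of library calls to workflow stages (an assumption,
# not the mapping to be produced by this project).
STAGE_OF_CALL = {
    "read_csv": "import",
    "dropna": "clean",
    "fit": "model",
    "plot": "visualize",
}

def calls_per_stage(source: str) -> Counter:
    """Count function calls per workflow stage in a Python script."""
    counts = Counter()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Handle plain names (plot(...)) and attributes (df.dropna()).
            name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", None)
            if name in STAGE_OF_CALL:
                counts[STAGE_OF_CALL[name]] += 1
    return counts

script = """
import pandas as pd
df = pd.read_csv("sample.csv")
df = df.dropna()
df.plot(x="var1", y="var2")
"""
print(calls_per_stage(script))
```

The same traversal could be extended to record which library a call comes from, which is what the final mapping task requires.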

Related Work and Further Reading

[1] Huber, F. (2025). Hands-on Introduction to Data Science with Python. v0.23, 2025, Zenodo. https://doi.org/10.5281/zenodo.10074474

[2] Davila-Santiago, E.; Shi, C.; Mahadwar, G.; Medeghini, B.; Insinga, L.; Hutchinson, R.; Good, S.; Jones, G. D. Machine learning applications for chemical fingerprinting and environmental source tracking using non-target chemical data. Environ. Sci. Technol. 2022, 56 (7), 4080–4090. DOI: 10.1021/acs.est.1c06655.

[3] Di Sotto S, Viviani M. Health Misinformation Detection in the Social Web: An Overview and a Data Science Approach. International Journal of Environmental Research and Public Health. 2022; 19(4):2173. https://doi.org/10.3390/ijerph19042173


Contact

Ruben Dunkel

Reengineering of R Research Scripts using gardenR (P/M)

Context

Data Science in R suffers from a common pitfall: most of the time, a script is created for a single publication and then left to rot, and most published R scripts are in no state to reproduce their results [1]. To combat this single-use practice, the gardenR tool has been created. gardenR uses a set of predefined function calls to create a dependency graph via program slicing, which is then converted into a Software Product Line (SPL). While the application has been tested on research data, it has not yet been evaluated whether the annotated SPL meets the expectations of researchers in the field.


Research Problem

In this work, we want to collect recent R Data Science scripts from publications and annotate them with gardenR. The annotated scripts are then hosted on a website of your creation that allows users to select a configuration of the corresponding Software Product Line; variants are derived by executing C preprocessor functionality on the client side [2]. The generated variant can then be downloaded. Finally, we want to contact the authors of each publication and present our annotated version, including a visual representation of their script. In an interview, the researchers are then asked what potential they see in the annotated code and to voice further ideas for improvement.


Tasks

  • Collect recent R Data Science scripts.
  • Annotate them with gardenR into an SPL.
  • Create an online configurator that turns the annotated scripts into a selected variant using C preprocessor statements.
  • Conduct interviews with the researchers who published the scripts on the usability of the annotated script and its derivatives.
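
To make the configurator's client-side variant derivation concrete, the sketch below strips unselected `#ifdef` blocks from an annotated script. It is a deliberately simplified stand-in for real C preprocessor semantics (no nesting, no `#else`):

```python
def derive_variant(lines, selected):
    """Keep only lines whose enclosing #ifdef FEATURE block is selected.
    Simplified sketch: assumes no nested #ifdefs and no #else branches."""
    out, keep = [], True
    for line in lines:
        stripped = line.strip()
        if stripped.startswith("#ifdef"):
            feature = stripped.split()[1]
            keep = feature in selected
        elif stripped.startswith("#endif"):
            keep = True
        elif keep:
            out.append(line)
    return out

# A hypothetical gardenR-annotated R script.
annotated = [
    "data <- read.csv('d.csv')",
    "#ifdef VISUALIZE",
    "plot(data$x, data$y)",
    "#endif",
    "summary(data)",
]
print(derive_variant(annotated, {"VISUALIZE"}))
print(derive_variant(annotated, set()))
```

A real configurator would instead run an actual preprocessor implementation in the browser, but the derivation principle is the same.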

Related Work and Further Reading

[1] Vidoni, Melina. "Software Engineering and R Programming: A Call for Research." (2021).

[2] https://gcc.gnu.org/onlinedocs/cpp/Ifdef.html


Contact

Ruben Dunkel

Overview on the Usage of Programming Languages in Data Science (P/B/M)

Context

With empirical science creating large data sets, the discipline of data science is more important than ever for wrangling conclusions from heaps of unstructured data [1]. There are several popular languages used in Data Science, such as Python, R, or Julia. While these are generally seen as the most prevalent, there is no data on how popular they are in different disciplines (such as Chemistry, Biology, Social Sciences, etc.).


Research Problem

We want to create a comprehensive overview of the distribution of programming languages and libraries across the different fields of research. By comparing their use and distribution, we want to gain insight into what the current stack of tools used for data science looks like.


Tasks

  • Create or modify a tool that accesses Zenodo and stores data science artifacts by field of research.
  • Analyze the artifacts on which language/framework/libraries are used and extract the provided functions of the libraries.
  • Create rankings for language/framework/libraries by field of research.
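
A first heuristic pass over collected artifacts could classify files by extension. The mapping below is an assumption (e.g., counting notebooks as Python) and would need refinement in the actual analysis:

```python
from collections import Counter

# Assumed extension-to-language map for a first heuristic pass.
LANG_BY_EXT = {".py": "Python", ".r": "R", ".R": "R", ".jl": "Julia", ".ipynb": "Python"}

def language_ranking(filenames):
    """Rank languages by how many artifact files belong to them."""
    counts = Counter()
    for name in filenames:
        for ext, lang in LANG_BY_EXT.items():
            if name.endswith(ext):
                counts[lang] += 1
                break
    return counts.most_common()

# Hypothetical file listing of one Zenodo artifact.
artifact = ["analysis.py", "model.R", "plots.jl", "helper.py"]
print(language_ranking(artifact))
```

Library usage would require looking inside the files (e.g., at import statements), which is the natural next step after this coarse classification.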

Related Work and Further Reading

[1] Van Der Aalst, Wil. "Process mining: Overview and opportunities." ACM Transactions on Management Information Systems (TMIS) 3.2 (2012): 1–17.


Contact

Ruben Dunkel

Feature Model Features

Exploring the Usage of Feature Models for Feature Model Analysis Benchmarking (P/B/M)

Context

New algorithms and approaches for feature model analysis are typically analyzed empirically for their publication. This process requires feature models that can be used for benchmarking the evaluated algorithm. However, it is unclear which feature models are used in which publication and how impactful the selection is on the results of the evaluation.


Research Problem

Extend the existing feature model benchmark with new feature models and track their usage in existing publications. Identify peculiarities in the generated data set and try to explain the correlations you find.


Tasks

  1. Extend the feature model benchmark for newer publications (2023 till now)
  2. Extract which paper uses which feature model in its analysis
  3. Analyze the usage behavior of feature models in feature model analysis benchmarking and identify correlations and outliers
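
Once the paper-to-model mapping of step 2 exists, usage statistics are a simple tally. A sketch with hypothetical extraction results (the paper and model names below are invented placeholders):

```python
from collections import Counter

# Hypothetical extraction result: paper -> feature models used in its evaluation.
usage = {
    "paper-2023-a": ["linux-2.6.33", "busybox-1.18"],
    "paper-2024-b": ["linux-2.6.33"],
    "paper-2024-c": ["linux-2.6.33", "automotive02"],
}

def model_popularity(usage):
    """Tally how often each feature model appears across publications,
    a first step towards spotting over-used benchmark instances."""
    counts = Counter()
    for models in usage.values():
        counts.update(set(models))  # count each model at most once per paper
    return counts.most_common()

print(model_popularity(usage))
```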

Related Work and Further Reading

  • Chico Sundermann, Vincenzo Francesco Brancaccio, Elias Kuiter, Sebastian Krieter, Tobias Heß, and Thomas Thüm. 2024. Collecting Feature Models from the Literature: A Comprehensive Dataset for Benchmarking. In Proceedings of the 28th ACM International Systems and Software Product Line Conference (SPLC '24). Association for Computing Machinery, New York, NY, USA, 54–65. https://doi.org/10.1145/3646548.3672590
  • github.com/SoftVarE-Group/feature-model-benchmark

Contact

Raphael Dunkel

Analyzing the Reproducibility of Feature Model Analysis Evaluations (P/B/M)

Context

New algorithms and approaches for feature model analysis are typically analyzed empirically for their publication. Replicating these results is important to validate research findings, ensure scientific integrity, and allow for the re-use of tools in further research. However, the evaluation is often not easily reproducible because of missing data or broken tooling.


Research Problem

Reproduce existing research in the context of feature model analysis by generating functioning replication packages and partially re-computing their evaluations. Furthermore, try to replicate these findings on new and unused feature models.


Tasks

  1. Select relevant studies that evaluated the performance of a feature model analysis algorithm (criteria may be provided)

  2. Reproduce the selected studies

  3. Replicate the selected studies on a small subset of new feature models


Related Work and Further Reading

  • Chico Sundermann, Vincenzo Francesco Brancaccio, Elias Kuiter, Sebastian Krieter, Tobias Heß, and Thomas Thüm. 2024. Collecting Feature Models from the Literature: A Comprehensive Dataset for Benchmarking. In Proceedings of the 28th ACM International Systems and Software Product Line Conference (SPLC '24). Association for Computing Machinery, New York, NY, USA, 54–65. https://doi.org/10.1145/3646548.3672590

  • Carver, J.C., Juristo, N., Baldassarre, M.T. et al. 2014. Replications of software engineering experiments. Empir Software Eng 19, 267–276. doi.org/10.1007/s10664-013-9290-8


Contact

Raphael Dunkel

Exploring Effort-aware Feature Selection for Machine Learning on Feature Models (P/B/M)

Context

Feature selection is an important step in modern machine learning pipelines. It helps to keep machine learning models simple, aids interpretability, and can even improve model performance in some cases. However, features that are difficult to compute require a high time investment, which is especially problematic for time-critical applications. Effort-aware feature selection addresses this issue by optimizing not only the performance of the trained ML models but also the computational cost of the selected features.


Research Problem

Select and evaluate different effort-aware feature selection strategies in the context of machine learning on feature models.


Tasks

  1. Systematically select effort-aware feature selection techniques that represent different approaches to this problem
  2. Integrate/Implement the selected approaches into fe4femo, a feature engineering framework for feature models
  3. Evaluate the effectiveness and trade-offs of the implemented techniques on the feature model benchmark
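
One conceivable baseline strategy is a greedy heuristic that ranks candidate features by score-to-cost ratio under a computation budget. The feature names, relevance scores, and costs below are hypothetical placeholders, not measured values, and fe4femo's actual interface is not assumed:

```python
def greedy_effort_aware(scores, costs, budget):
    """Greedily select features with the best score-to-cost ratio
    until the computation budget is exhausted (a toy baseline heuristic)."""
    ranked = sorted(scores, key=lambda f: scores[f] / costs[f], reverse=True)
    selected, spent = [], 0.0
    for f in ranked:
        if spent + costs[f] <= budget:
            selected.append(f)
            spent += costs[f]
    return selected

# Hypothetical per-feature relevance scores and computation costs
# for features computed on feature models.
scores = {"num_features": 0.9, "clause_density": 0.7, "treewidth": 0.95}
costs  = {"num_features": 1.0, "clause_density": 2.0, "treewidth": 50.0}
print(greedy_effort_aware(scores, costs, budget=5.0))
```

Note how the highest-scoring feature ("treewidth") is skipped because it blows the budget; multi-objective approaches like NSGA-II (see the references below) explore this trade-off more systematically.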

Related Work and Further Reading

  • Isabelle Guyon, Steve Gunn, Masoud Nikravesh, and Lotfi A. Zadeh. 2006. Feature Extraction: Foundations and Applications (Studies in Fuzziness and Soft Computing). Springer-Verlag, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-35488-8

  • Hamdani, T.M., Won, JM., Alimi, A.M., Karray, F. 2007. Multi-objective Feature Selection with NSGA II. In: Beliczynski, B., Dzielinski, A., Iwanowski, M., Ribeiro, B. (eds) Adaptive and Natural Computing Algorithms. ICANNGA 2007. Lecture Notes in Computer Science, vol 4431. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-71618-1_27  

  • Yong Zhang, Dun-wei Gong, and Jian Cheng. 2017. Multi-Objective Particle Swarm Optimization Approach for Cost-Based Feature Selection in Classification. IEEE/ACM Trans. Comput. Biol. Bioinformatics 14, 1 (January 2017), 64–75. https://doi.org/10.1109/TCBB.2015.2476796 


Contact

Raphael Dunkel

Exploring the Impact of Feature Transformations for Machine Learning on Feature Models (P/B/M)

Context

Feature transformation is an important step in modern machine learning pipelines. It can improve model performance, for example by removing collinearity, or make models easier to interpret, e.g., through dimensionality reduction of the feature space. However, the impact of and trade-offs between various feature transformation techniques have not yet been evaluated in the context of machine learning on feature models.


Research Problem

Select and evaluate different feature transformation strategies in the context of machine learning on feature models.


Tasks

  1. Systematically select feature transformation techniques that represent different approaches to this problem

  2. Integrate/Implement the selected approaches into fe4femo, a feature engineering framework for feature models

  3. Evaluate the effectiveness of the implemented techniques on the feature model benchmark
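
As a simple example of such a transformation, z-score standardization removes scale differences between feature columns before training. A pure-Python sketch (fe4femo's actual interface is not assumed here):

```python
import math

def standardize(column):
    """Z-score a numeric feature column: subtract the mean, divide by the
    (population) standard deviation, so the result has mean 0 and std 1."""
    n = len(column)
    mean = sum(column) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in column) / n)
    return [(x - mean) / std for x in column]

# Hypothetical raw values of one feature computed on three feature models.
sizes = [10.0, 20.0, 30.0]
print(standardize(sizes))
```

More involved transformations to evaluate would include PCA-style dimensionality reduction or autoencoders, as discussed in the references below.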


Related Work and Further Reading

  • Kuhn, M., Johnson, K. 2013. Data Pre-processing. In: Applied Predictive Modeling. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-6849-3_3

  • Liu, Huan, and Hiroshi Motoda. 1998. Feature transformation and subset selection. In IEEE Intell Syst Their Appl 13.2, 26-28. https://doi.org/10.1109/MIS.1998.671088

  • Wang, Yasi and Yao, Hongxun and Zhao, Sicheng. 2015. Auto-Encoder Based Dimensionality Reduction. Neurocomputing. 184. https://doi.org/10.1016/j.neucom.2015.08.104 


Contact

Raphael Dunkel

Linux Kernel Configurations

Evaluating Translations of Linux Kernel Configurations

Context
The Linux kernel is one of the largest publicly available feature-oriented software systems and a focus of much software product line (SPL) research. Its high complexity stems from the fact that the Linux kernel can be configured for many diverse use cases, ranging from embedded systems to high-performance computing. However, the same complexity that makes it versatile also means that analyzing the Linux kernel in an SPL context is often hard.

The underlying feature model of the Linux kernel is defined through the dedicated configuration language Kconfig. Any instance of the kernel is built using a configuration file, which defines the features to be included in the compiled kernel. Applying SPL algorithms (e.g., sampling) to the Linux kernel requires translating the Kconfig model into a boolean feature model formula. Due to peculiarities of the Kconfig language for Linux variability, such translations generally differ depending on the implementation.

Research Problem

There are different solutions for translating a specific Linux kernel version into a boolean feature model formula, as well as for translating a kernel configuration into a corresponding assignment. The other direction, translating a formula assignment back into a kernel configuration, is rarely considered in research. However, to use SPL algorithms that work on feature models for Linux, the translation must work in both directions. Hence, there is a need to evaluate whether existing translations of Linux kernel versions into boolean feature model formulas handle configurations accurately, and whether the results can be translated back into Linux configurations.

Tasks

  1. Compile an overview of Linux kernel feature model translations
  2. Analyze their behavior when translating configurations and non-boolean features
  3. Devise and evaluate a back-translation of formula assignments to Linux configurations
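
As a starting point, the forward direction is straightforward to sketch for boolean options. The snippet below maps a .config fragment to a truth assignment, deliberately glossing over tristate semantics (m is treated as selected) and non-boolean options, which are exactly the hard cases this topic is about:

```python
def config_to_assignment(config_text):
    """Translate a Linux .config fragment into a simplified boolean
    assignment: y/m count as selected, unset options as deselected.
    Tristate and non-boolean options need a richer encoding in practice;
    here a non-boolean value simply maps to False."""
    assignment = {}
    for line in config_text.splitlines():
        line = line.strip()
        if line.startswith("# CONFIG_") and line.endswith(" is not set"):
            assignment[line.split()[1]] = False
        elif line.startswith("CONFIG_") and "=" in line:
            name, value = line.split("=", 1)
            assignment[name] = value in ("y", "m")
    return assignment

fragment = """\
CONFIG_SMP=y
CONFIG_USB=m
# CONFIG_DEBUG_KERNEL is not set
CONFIG_NR_CPUS=64
"""
print(config_to_assignment(fragment))
```

The back-translation asked for in task 3 is harder precisely because information lost here (tristate values, numeric and string options, defaults) must be reconstructed.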

Related Work and Further Reading

David Fernandez-Amoros, Ruben Heradio, Christoph Mayr-Dorn, and Alexander Egyed. 2019. A Kconfig Translation to Logic with One-Way Validation System. In Proceedings of the 23rd International Systems and Software Product Line Conference - Volume A (SPLC '19). Association for Computing Machinery, New York, NY, USA, 303–308. https://doi.org/10.1145/3336294.3336313

Elias Kuiter, Chico Sundermann, Thomas Thüm, Tobias Hess, Sebastian Krieter, and Gunter Saake. 2025. How Configurable is the Linux Kernel? Analyzing Two Decades of Feature-Model History. ACM Trans. Softw. Eng. Methodol. Just Accepted (April 2025). https://doi.org/10.1145/3729423

KConfig Language Documentation: https://docs.kernel.org/kbuild/kconfig-language.html

Contact

christopher.rau(at)tu-braunschweig.de

Feature Mappings

Feature Mapping Fixes in SPL Histories

Context

Mistakes in feature mappings can easily be made when multiple developers work on the same product line. Potential reasons are differing understandings of which artifacts belong to a feature, or a lack of communication.


Research Problem

How many of the fixes made in a product line are feature mapping fixes? How do feature mapping fixes appear in real-world product lines? What different kinds of feature mapping fixes exist?


Tasks

  1. Analyze existing product line version control histories.
  2. Find and define different kinds of feature mapping fixes.
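
When mining version control histories (task 1), one conceivable heuristic flags commits whose changed lines touch only preprocessor annotations as candidate mapping fixes, since the implementation stays untouched while the mapping changes. The heuristic itself is an assumption, not taken from the referenced tooling:

```python
def is_mapping_only_change(removed, added):
    """Heuristic: a commit hunk is a candidate feature-mapping fix if every
    changed line is a preprocessor annotation (#if/#ifdef/#endif/...),
    i.e. only the mapping changed, not the mapped code."""
    directives = ("#if", "#ifdef", "#ifndef", "#else", "#elif", "#endif")
    changed = removed + added
    return bool(changed) and all(l.strip().startswith(directives) for l in changed)

# A hunk that swaps the feature an artifact is mapped to.
print(is_mapping_only_change(["#ifdef FOO"], ["#ifdef BAR"]))   # True
# A hunk that also edits implementation code.
print(is_mapping_only_change(["#ifdef FOO"], ["x = 1;"]))       # False
```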

Related Work and Further Reading

  • Paul Maximilian Bittner, Alexander Schultheiß, Benjamin Moosherr, Timo Kehrer, Thomas Thüm. 2024. Variability-Aware Differencing with DiffDetective. In Companion Proceedings of the 32nd ACM International Conference on the Foundations of Software Engineering. doi.org/10.1145/3663529.3663813
  • Paul Maximilian Bittner, Christof Tinnes, Alexander Schultheiß, Sören Viegener, Timo Kehrer, Thomas Thüm. 2022. Classifying Edits to Variability in Source Code. In Proceedings of the 30th SCM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE '22). https://doi.org/10.1145/3540250.3549108 

Contact

Rahel Sundermann

Logics and Solvers for Uncertainty in Feature Mappings

Context

Various analyses of configurable systems rely on automated reasoning. However, it is unclear how to capture the influence of uncertainty on those analyses. Uncertainty needs to be modeled and reasoned about. To this end, we need a fitting logic (e.g., subjective logic) with existing tool support (e.g., solvers).


Research Problem

What kinds of logics are best suited for our use case of modeling uncertainty? How well are these logics supported?


Tasks

  1. Survey the literature for formalisms (e.g., kinds of logic) that support integrating uncertainty.
  2. Gather available solvers for the identified formalisms.
  3. Empirically evaluate the gathered solvers on feature mappings.
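
To give a feel for what such a formalism offers: a binomial opinion in subjective logic [1] carries belief, disbelief, and an explicit uncertainty mass, and projects to an expected probability via E = b + a*u. A minimal sketch:

```python
def expected_probability(belief, disbelief, uncertainty, base_rate):
    """Projected probability of a binomial subjective-logic opinion:
    E = b + a*u, where b + d + u = 1 and a is the prior base rate."""
    assert abs(belief + disbelief + uncertainty - 1.0) < 1e-9, "not a valid opinion"
    return belief + base_rate * uncertainty

# An opinion about a feature mapping with some mass left uncertain:
# E = 0.6 + 0.5 * 0.2 = 0.7
print(expected_probability(0.6, 0.2, 0.2, 0.5))
```

Whether solvers exist that reason over such opinions at the scale of real feature mappings is exactly what task 2 and 3 are about.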

Related Work and Further Reading

  • Audun Jøsang. 2001. A Logic for Uncertain Probabilities. In International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems. https://doi.org/10.1142/S0218488501000831
  • Emad Saad. 2009. Probabilistic Reasoning by SAT Solvers. In European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty. doi.org/10.1007/978-3-642-02906-6\_57

Contact

Rahel Sundermann

Knowledge Compilation

Automated Reasoning with Currently Unexplored Formats (B/M/P)

Context

Knowledge compilation refers to translating an input problem to a format that enables efficient subsequent operations, such as satisfiability checks. Popular formats are d-DNNFs [1] or BDDs [2]. Both have been successfully applied in product-line engineering to substantially accelerate practice-relevant analyses.
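
To illustrate why such formats pay off: on a d-DNNF, model counting reduces to a single bottom-up traversal. A minimal sketch, assuming a smooth d-DNNF and a toy tuple encoding of nodes (both assumptions, not part of any existing tool):

```python
def count_models(node):
    """Model count of a smooth d-DNNF by one bottom-up traversal:
    literals count 1, decomposable AND nodes multiply their children's
    counts, deterministic OR nodes add them.
    Toy node encoding (an assumption): ('lit', v) | ('and', kids) | ('or', kids)."""
    kind = node[0]
    if kind == "lit":
        return 1
    counts = [count_models(child) for child in node[1]]
    if kind == "and":
        result = 1
        for c in counts:
            result *= c
        return result
    return sum(counts)  # deterministic OR

# a <-> b as a d-DNNF: (a AND b) OR (NOT a AND NOT b); it has exactly 2 models.
dnnf = ("or", [("and", [("lit", "a"), ("lit", "b")]),
               ("and", [("lit", "-a"), ("lit", "-b")])])
print(count_models(dnnf))  # 2
```

The same traversal idea underlies other polynomial-time queries on d-DNNFs; the open question of this topic is which other formats admit comparably cheap operations.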


Research Problem

The promising results of the knowledge-compilation formats applied so far [3, 4] motivate the exploration of further formats. Many formats have been suggested, but reusing them for automated reasoning has not been explored, or only sparsely.


Tasks

  1. Inspect available unexplored formats 

  2. Develop operations to enable feature-model analyses

  3. Implement prototype

  4. Compare advances to state-of-the-art


Contact

Chico Sundermann


Related Work and Further Reading

[1] Adnan Darwiche. 2002. A compiler for deterministic, decomposable negation normal form. In Eighteenth national conference on Artificial intelligence. American Association for Artificial Intelligence, USA, 627–634. https://cdn.aaai.org/AAAI/2002/AAAI02-094.pdf
[2] Bryant, R.E. (2018). Binary Decision Diagrams. In: Clarke, E., Henzinger, T., Veith, H., Bloem, R. (eds) Handbook of Model Checking. Springer, Cham. https://doi.org/10.1007/978-3-319-10575-8_7
[3] Darwiche, Adnan, and Pierre Marquis. "A knowledge compilation map." Journal of Artificial Intelligence Research 17 (2002): 229-264., https://doi.org/10.1613/jair.989
[4] Sundermann, C., Kuiter, E., Heß, T. et al. On the benefits of knowledge compilation for feature-model analyses. Ann Math Artif Intell 92, 1013–1050 (2024). doi.org/10.1007/s10472-023-09906-6

Knowledge Compilation Beyond Boolean Logic (M/P)

Context

Knowledge compilation refers to translating an input problem to a format that enables efficient subsequent operations, such as satisfiability checks. Popular formats are d-DNNFs [1] or BDDs [2]. Both have been successfully applied in product-line engineering to substantially accelerate practice-relevant analyses.


Research Problem

While available knowledge compilation strategies all appear to focus on propositional logic, many problems in practice rely on more expressive constraints (e.g., with numeric variables). Extending knowledge compilation to cope with such expressive constraints could yield substantial runtime benefits.


Tasks

  1. Design beyond-propositional target language

  2. Develop compilation from feature model

  3. Implement prototype

  4. Compare advances to state-of-the-art


Contact

Chico Sundermann


Related Work and Further Reading

[1] Adnan Darwiche. 2002. A compiler for deterministic, decomposable negation normal form. In Eighteenth national conference on Artificial intelligence. American Association for Artificial Intelligence, USA, 627–634. https://cdn.aaai.org/AAAI/2002/AAAI02-094.pdf
[2] Bryant, R.E. (2018). Binary Decision Diagrams. In: Clarke, E., Henzinger, T., Veith, H., Bloem, R. (eds) Handbook of Model Checking. Springer, Cham. https://doi.org/10.1007/978-3-319-10575-8_7
[3] Darwiche, Adnan, and Pierre Marquis. "A knowledge compilation map." Journal of Artificial Intelligence Research 17 (2002): 229-264., https://doi.org/10.1613/jair.989
[4] Sundermann, C., Kuiter, E., Heß, T. et al. On the benefits of knowledge compilation for feature-model analyses. Ann Math Artif Intell 92, 1013–1050 (2024). doi.org/10.1007/s10472-023-09906-6

 

Variational d-DNNFs (M/P)

Context

Knowledge compilation refers to translating an input problem to a format that enables efficient subsequent operations, such as satisfiability checks. The deterministic decomposable negation normal form (d-DNNF) is a format that has been successfully applied in various domains, including feature-model analysis.

Research Problem

In many cases, slight variations of a feature model need to be analyzed. With current techniques, a whole new d-DNNF has to be compiled for each variant, inducing immense computational effort.


Tasks

  1. Develop mechanisms to include similar feature-model variants in a single d-DNNF

  2. Develop compilation strategy

  3. Adapt reasoning algorithms on the resulting d-DNNF

  4. Implement prototype

  5. Compare advances to state-of-the-art


Contact

Chico Sundermann


Related Work and Further Reading

[1] Adnan Darwiche. 2002. A compiler for deterministic, decomposable negation normal form. In Eighteenth national conference on Artificial intelligence. American Association for Artificial Intelligence, USA, 627–634. https://cdn.aaai.org/AAAI/2002/AAAI02-094.pdf
[2] Chico Sundermann, Heiko Raab, Tobias Heß, Thomas Thüm, and Ina Schaefer. 2024. Reusing d-DNNFs for Efficient Feature-Model Counting. ACM Trans. Softw. Eng. Methodol. 33, 8, Article 208 (November 2024), 32 pages. doi.org/10.1145/3680465

Configuration Counting

Approximate #SAT Solving (B/M/P)

Context

Configuration counting refers to computing the number of valid configurations for a given feature model, which enables a plethora of automated analyses [1]. To enable these analyses, configuration counting is often reduced to #SAT (i.e., propositional model counting).


Research Problem

#SAT is a computationally hard problem and often induces unacceptable runtimes for large product lines in practice. Approximating the number of valid configurations may sometimes be a viable option to enable analyses, but available approximate #SAT solvers fail to scale for product-line instances [2].


Tasks

  1. Gather effective simplifications

  2. Identify promising approximations

  3. Implement prototype

  4. Compare advances to state of the art
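
As a baseline for comparison, the naive Monte Carlo estimator below samples uniform assignments and scales the satisfying fraction by 2^n. It is a sketch only; practical approximate counters rely on hashing-based techniques with formal guarantees instead. The clause encoding (signed integers, DIMACS-style) is an assumption:

```python
import random

def approx_count(clauses, n_vars, samples=20000, seed=42):
    """Naive Monte Carlo #SAT approximation: sample uniform assignments,
    estimate the satisfying fraction, scale by 2^n_vars. Baseline sketch
    only; it degrades badly when the satisfying fraction is tiny."""
    random.seed(seed)
    hits = 0
    for _ in range(samples):
        assign = [random.random() < 0.5 for _ in range(n_vars)]
        # A clause is satisfied if any of its literals matches the assignment.
        if all(any(assign[abs(l) - 1] == (l > 0) for l in clause) for clause in clauses):
            hits += 1
    return hits / samples * 2 ** n_vars

# (x1 OR x2) AND (NOT x1 OR x3) over 3 variables; the exact count is 4.
print(approx_count([[1, 2], [-1, 3]], 3))
```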


Contact

Chico Sundermann


Related Work and Further Reading

[1] Chico Sundermann, Michael Nieke, Paul M. Bittner, Tobias Heß, Thomas Thüm, and Ina Schaefer. 2021. Applications of #SAT Solvers on Feature Models. In Proceedings of the 15th International Working Conference on Variability Modelling of Software-Intensive Systems (VaMoS '21). Association for Computing Machinery, New York, NY, USA, Article 12, 1–10. https://doi.org/10.1145/3442391.3442404
[2] Sundermann, C., Heß, T., Nieke, M. et al. Evaluating state-of-the-art # SAT solvers on industrial configuration spaces. Empir Software Eng 28, 29 (2023). doi.org/10.1007/s10664-022-10265-9

Universal Variability Language

Efficient Conversion Strategies for the Universal Variability Language (B/M/P)

Context

The Universal Variability Language (UVL) is a format for specifying feature models [1]. It is developed as a community effort by researchers around the globe [2]. The adoption of UVL and its tooling landscape are continuously growing. UVL has an extensible language design that allows users to select a subset of the available feature-modeling constructs.


Research Problem

One goal of UVL is enabling exchange between different tools and users. To this end, different feature-modeling constructs that are part of an extension can be translated to simpler constructs with conversion strategies. However, the currently employed conversion strategies are often inefficient and fail to scale for complex feature models.


Tasks

  1. Identify existing (but not yet applied) conversions

  2. Develop missing conversions

  3. Implement prototype

  4. Compare advances to state-of-the-art
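
As a tiny illustration of such a conversion, an `alternative` group (exactly one child may be selected, given the parent) can be lowered to plain propositional constraints. The string-based encoding below is a hypothetical sketch; note that the pairwise-exclusion step is quadratic in the number of children, one reason naive strategies fail to scale:

```python
from itertools import combinations

def alternative_to_formulas(parent, children):
    """Lower a UVL-style 'alternative' group to propositional constraints:
    the parent implies at least one child, children imply the parent,
    and children are pairwise mutually exclusive (quadratic encoding)."""
    formulas = [f"{parent} => ({' | '.join(children)})"]
    formulas += [f"!({a} & {b})" for a, b in combinations(children, 2)]
    formulas += [f"{c} => {parent}" for c in children]
    return formulas

print(alternative_to_formulas("Engine", ["Petrol", "Diesel", "Electric"]))
```

More efficient encodings of at-most-one constraints (e.g., sequential or commander encodings) are exactly the kind of improvement this topic targets.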


Contact

Chico Sundermann


Related Work and Further Reading

[1] David Benavides, Chico Sundermann, Kevin Feichtinger, José A. Galindo, Rick Rabiser, Thomas Thüm, UVL: Feature modelling with the Universal Variability Language, Journal of Systems and Software, Elsevier, doi.org/10.1016/j.jss.2024.112326
[2] https://universal-variability-language.github.io/

Reasoning Recommender System (B/M/P)

Context

The Universal Variability Language (UVL) is a format for specifying feature models [1]. It is developed as a community effort by researchers around the globe [2]. The adoption of UVL and its tooling landscape are continuously growing. UVL has an extensible language design that allows users to select a subset of the available feature-modeling constructs.


Research Problem

Depending on the selected UVL extensions, different reasoning engines (i.e., tools enabling automated analysis) are (1) applicable and (2) promising regarding efficiency. For users, it is often unclear which reasoning engine to use for the problem at hand.


Tasks

  1. Collect promising off-the-shelf solutions

  2. Provide a mapping between solutions and extensions

  3. Develop concepts for missing solutions

  4. Implement a prototype recommender system

  5. Compare advances to state-of-the-art


Contact

Chico Sundermann


Related Work and Further Reading

[1] David Benavides, Chico Sundermann, Kevin Feichtinger, José A. Galindo, Rick Rabiser, Thomas Thüm, UVL: Feature modelling with the Universal Variability Language, Journal of Systems and Software, Elsevier, doi.org/10.1016/j.jss.2024.112326
[2] https://universal-variability-language.github.io/
