Iwo Kurzidem
Software and Computational Systems Lab
Department of Computer Science
Ludwig-Maximilians-Universität München (LMU Munich)
Oettingenstraße 67
80538 Munich
Germany
- Office
- Room F 004, Oettingenstr. 67
- External: Fraunhofer IKS (Institute for Cognitive Systems), Hansastrasse 32, 80686 Munich
- Office hours
- Wed. 4-5 p.m. (please notify me in advance if you plan to visit)
Outside of this time, by appointment.
Just write me an e-mail, and we can schedule a virtual meeting.
- E-mail
- iwo.kurzidem@iks.fraunhofer.de
- i.kurzidem@campus.lmu.de
- ORCID
- 0009-0006-4706-6113
Thesis Mentoring
Available topics
Analysis of Dissimilarity of ML Classification Models via Statistical Data Clustering Using Zonotopes and/or Polyhedra
This topic is available as a BSc or MSc final thesis. Mentoring is available in English and German.
Goal: In this thesis, we aim to use clustering algorithms to detect systematic errors in Deep Neural Networks (DNNs) by over-approximating the range of outputs for a given set of inputs derived from statistical test data. The clustering algorithms build on formal methods and include abstractions via intervals, zonotopes, and polyhedra. These abstractions capture geometric and algebraic relations between input and output ranges, allowing analysis of the volume and shape of the high-dimensional input/output space. The objective is to identify, and where possible quantify, the hypervolume of the resulting high-dimensional zonotopes and/or polyhedra.
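As a flavor of the approach, interval abstraction — the simplest of the three domains named above — already yields a sound over-approximation of a network's output range; zonotopes and polyhedra refine the same idea with richer set representations. The sketch below propagates an input box through a tiny two-layer ReLU network whose weights are hand-picked for illustration only (`W1`, `W2`, and the box bounds are assumptions, not taken from any real model):

```python
import numpy as np

def affine_interval(lo, hi, W, b):
    # Split W into positive/negative parts: the tight interval image of a
    # box under an affine map pairs positive weights with like bounds and
    # negative weights with opposite bounds.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def relu_interval(lo, hi):
    # ReLU is monotone, so it maps interval bounds to interval bounds.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Hypothetical two-layer network (illustration only).
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)

# Input box around the origin, e.g. an infinity-norm perturbation ball.
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
lo, hi = relu_interval(*affine_interval(lo, hi, W1, b1))
lo, hi = affine_interval(lo, hi, W2, b2)
print(lo, hi)  # sound lower/upper bounds on the network's single output
```

The same two-step pattern (affine transformer, then activation transformer) carries over to zonotopes, where the box is replaced by a center vector plus a generator matrix.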
Background: Object classification systems based on Deep Neural Networks (DNNs) utilize machine learning (ML) algorithms. However, most of these ML models are trained end-to-end and are, therefore, considered "black boxes." For safety-critical applications, this necessitates additional verification at different levels of abstraction, especially since many properties of data-driven models using ML are only evaluated through statistical testing. This thesis aims to investigate whether clustering available statistical data allows us to make predictions about the fundamental dissimilarity of different ML classification models.
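To make the dissimilarity question concrete, the toy sketch below compares two invented classifiers on random test data, collects the inputs where their predictions disagree, and clusters those inputs with a minimal k-means: systematic (rather than random) dissimilarity would show up as coherent clusters in the disagreement set. All decision rules and names here are hypothetical illustrations, not part of the thesis setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical classifiers that disagree in a structured region.
def model_a(X):
    return (X[:, 0] + X[:, 1] > 0).astype(int)

def model_b(X):
    return (X[:, 0] > 0).astype(int)

X = rng.uniform(-1, 1, size=(500, 2))        # statistical test data
disagree = X[model_a(X) != model_b(X)]       # inputs where labels differ

def kmeans(pts, k=2, iters=20, seed=0):
    """Minimal k-means to expose clusters of systematic disagreement."""
    r = np.random.default_rng(seed)
    centers = pts[r.choice(len(pts), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        labels = np.argmin(((pts[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([pts[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return centers, labels

centers, labels = kmeans(disagree)
print(len(disagree), "disagreement points, cluster centers:\n", centers)
```

In the thesis, the cluster geometry would additionally be abstracted by zonotopes or polyhedra so that the volume of each disagreement region can be bounded, rather than only its point cloud inspected.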
Requirements:
- Practical knowledge of Machine Learning.
- Successfully completed the following courses: Formale Spezifikation und Verifikation (FSV) and Formale Sprachen und Komplexität (FSK).
- Ability to program in Python.
Finished topics
Formal Verification of Different Properties of an Explainable Random Forest Model [1, 2]
This topic is available as a BSc thesis or as an MSc research training course with 6/12 ECTS. Mentoring is available in English and German.
Goal: In this thesis, we want to use formal methods to verify different properties of an explainable ML model (a random forest). The goal is to identify which properties of the ML model can be verified, with which approach, and to what degree. If an approach or method is not suitable, we want to find out why, and whether modifications or adaptations can make it work. ML properties of interest include contradictions, identification of sets, logic simplification, etc.
Background: Current autonomous systems use Machine Learning (ML) based algorithms for object detection, classification, trajectory planning, and more. However, most of these ML models are trained end-to-end and are therefore "black boxes". For safety-relevant applications, this requires additional verification at different levels of abstraction. Explainable ML in particular promises a human-understandable model, but it is not without uncertainties of its own. To address this, we use formal verification to investigate the properties of a trained random forest (RF), provided as a JSON file.
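As a flavor of what such a verification step might look like, the sketch below checks one concrete property — contradictory, i.e. unreachable, decision paths — on a single tree in an assumed JSON layout. Both the layout and the `infeasible_paths` helper are illustrative inventions (the actual thesis works with the chair's real RF export format, and a full checker would track one interval per feature):

```python
import json

# Hypothetical JSON layout for one tree: internal nodes hold a threshold
# ("go left if x <= threshold"), leaves hold a class label. This toy tree
# tests a single feature, which keeps the sketch short.
tree_json = """
{"threshold": 3.0,
 "left":  {"threshold": 5.0,
           "left":  {"leaf": "A"},
           "right": {"leaf": "B"}},
 "right": {"leaf": "C"}}
"""

def infeasible_paths(node, lo=float("-inf"), hi=float("inf")):
    """Collect leaves that no input can reach.

    Along each root-to-leaf path we track the interval (lo, hi] that the
    tested feature may still occupy; an empty interval means the branch
    conditions on that path contradict each other.
    """
    if "leaf" in node:
        return [node["leaf"]] if lo >= hi else []
    t = node["threshold"]
    return (infeasible_paths(node["left"], lo, min(hi, t))      # x <= t
            + infeasible_paths(node["right"], max(lo, t), hi))  # x > t

dead = infeasible_paths(json.loads(tree_json))
print("unreachable leaves:", dead)  # leaf "B" requires x <= 3 and x > 5
```

Identifying such dead paths is one instance of the "contradictions" property mentioned above; set identification and logic simplification admit similar path-condition analyses, e.g. via an SMT solver instead of hand-rolled intervals.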
Requirements:
- Practical knowledge of decision trees and random forests
- Successfully completed the following courses: Formale Spezifikation und Verifikation (FSV) and Formale Sprachen und Komplexität (FSK)
- (optional) Ability to program in Python
Formal Verification of Different Properties of a Random Forest Model [1, 2]
This topic is available as a BSc thesis or as an MSc research training course with 6/12 ECTS. Mentoring is available in English and German.
Goal: In this thesis, we want to use formal methods to verify different properties of an explainable ML model (a random forest). The goal is to identify which properties of the ML model can be verified, with which approach, and to what degree. If an approach or method is not suitable, we want to find out why, and whether modifications or adaptations can make it work. ML properties of interest include contradictions, identification of sets, logic simplification, etc.
Background: Current autonomous systems use Machine Learning (ML) based algorithms for object detection, classification, trajectory planning, and more. However, most of these ML models are trained end-to-end and are therefore "black boxes". For safety-relevant applications, this requires additional verification at different levels of abstraction. Explainable ML in particular promises a human-understandable model, but it is not without uncertainties of its own. To address this, we use formal verification to investigate the properties of a trained random forest (RF), provided as a JSON file.
Requirements:
- Practical knowledge of decision trees and random forests
- Successfully completed the following courses: Formale Spezifikation und Verifikation (FSV) and Formale Sprachen und Komplexität (FSK)
- (optional) Ability to program in Python
If you're a student interested in writing your thesis at our chair, you should also have a look at our full list of currently available theses.