Doctoral Colloquia

PhD Students watching a presentation at a DAI Doctoral Colloquium
Project TERAIS

Doctoral colloquia are a platform for PhD students at DAI to present their research to a wider departmental audience, exchange ideas, and foster friendships. They are organized weekly during the semester by assoc. prof. Damas Gruska. The colloquia are among the activities of the TERAIS project, which aims to elevate DAI into a workplace of international academic excellence.

When
weekly on Mondays at 13:10 (during the teaching part of the semester)
Where
I-9 and online in MS Teams

Recap of the first semester of Doctoral Colloquia (Summer 2023)

Upcoming presentations


František Dráček: Anomaly detection from TLE data

The burgeoning number of objects in Low Earth Orbit (LEO) poses a critical challenge for satellite operators, demanding robust collision avoidance measures. While extensive databases track these objects, identifying anomalies within the data remains crucial. This research investigates the application of machine learning methods to automatically detect anomalies in satellite data, potentially enhancing space situational awareness and safeguarding future space operations.
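
To make the idea of anomaly detection in orbital data concrete, below is a minimal, self-contained sketch. It is not the method presented in the talk: the synthetic orbital-element history, the use of first differences as features, and the choice of an isolation forest are all assumptions made purely for illustration.

```python
# Illustrative sketch only: flagging anomalous jumps in a satellite's mean
# motion history, as might be derived from successive TLEs. All data here is
# synthetic and the detector choice is an assumption, not the talk's method.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic history of orbital elements over 200 epochs: mean motion
# (revs/day), eccentricity, and inclination (deg), each with small noise.
n_epochs = 200
mean_motion = 15.05 + rng.normal(0, 1e-4, n_epochs)
eccentricity = 0.001 + rng.normal(0, 1e-5, n_epochs)
inclination = 51.6 + rng.normal(0, 1e-3, n_epochs)

# Inject a maneuver-like jump in mean motion at epoch 120.
mean_motion[120:] += 5e-3

# Use per-epoch changes (first differences) as features, so sudden jumps
# stand out against the slowly varying background.
X = np.column_stack([
    np.diff(mean_motion),
    np.diff(eccentricity),
    np.diff(inclination),
])

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(X)            # -1 marks anomalous epochs
anomalous_epochs = np.where(labels == -1)[0] + 1

print("Flagged epochs:", anomalous_epochs)  # expected to include epoch 120
```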

Presentations Plan
PhD Student – Date
Peter Anthony – 26. 2.
František Dráček – 4. 3.
Daniel Kyselica – 11. 3.
Elena Štefancová – 18. 3.
Janka Boborová – 25. 3.
Marek Šuppa – 8. 4.
Radovan Gregor – 15. 4.
Fatana Jafari – 22. 4.
Filip Kerák – 29. 4.
Pavol Kollár – 6. 5.
Ján Pastorek – 13. 5.

Past presentations

Summer semester 2023/24

Peter Anthony: Tailoring Logic Explained Network for a Robust Explainable Malware Detection

Peter Anthony presenting at a DAI Doctoral Colloquium

The field of malware research faces persistent challenges in adopting machine learning solutions due to low generalization and a lack of explainability. While deep learning, particularly artificial neural networks, has shown promise in addressing the generalization problem, its inherent black-box nature makes it difficult to provide explicit explanations for predictions. Interpretable machine learning models, such as linear regression and decision trees, prioritize transparency but often sacrifice performance. In this work, to address the twin needs of robustness and explainability in cybersecurity, we apply a recently proposed interpretable-by-design neural network, the Logic Explained Network (LEN), to the complex landscape of malware detection. We investigate the effectiveness of LEN in discriminating malware and providing meaningful explanations, and evaluate the quality of the explanations over increasing feature sizes using fidelity and other standard metrics. Additionally, we introduce an improvement to the simplification approach for the global explanation. Our analysis was carried out using static malware features provided by the EMBER dataset. The experimental results show that LEN's discriminative performance is competitive with black-box deep learning models. LEN's explanations demonstrate high fidelity, indicating that they genuinely reflect the model's inner workings. However, a notable trade-off between explanation fidelity and compactness is identified.
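
For readers unfamiliar with the fidelity metric mentioned in the abstract, the sketch below illustrates the general idea: measuring how often a simple, human-readable explanation reproduces a black-box model's decisions. It is not the Logic Explained Network or the EMBER pipeline from the talk; the synthetic dataset, the random forest "black box", and the shallow decision tree standing in for an extracted logic rule are placeholders chosen so the example runs out of the box.

```python
# Minimal sketch of explanation fidelity: the fraction of inputs on which an
# interpretable surrogate agrees with the black-box model it explains.
# Placeholders only; not the LEN-based method presented in the talk.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic binary classification data standing in for static malware features.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)

# "Black-box" model whose behaviour we want to explain.
blackbox = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
bb_pred = blackbox.predict(X)

# Interpretable surrogate trained to mimic the black-box predictions; its
# depth limit is a crude proxy for explanation compactness.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_pred)

# Fidelity: how often the explanation reproduces the black-box decision.
fidelity = np.mean(surrogate.predict(X) == bb_pred)
print(f"Fidelity of surrogate explanation: {fidelity:.3f}")
print(export_text(surrogate))   # the human-readable rules themselves
```

Shallower surrogates are easier to read but typically agree with the black box less often, which is the fidelity-compactness trade-off the abstract refers to.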

Winter semester 2023/24

To appear.


Summer semester 2022/23

To appear.