Doctoral Colloquia

PhD Students watching a presentation at a DAI Doctoral Colloquium
Project TERAIS

Doctoral colloquia are a platform for PhD students at DAI to present their research to a wider departmental audience, exchange ideas, and foster friendships. They are organized weekly during the semester by Assoc. Prof. Damas Gruska and are among the activities of the TERAIS project, which aims to elevate DAI into a workplace of international academic excellence.

When: weekly on Mondays at 13:10 (during the teaching part of the semester)
Where: I-9 and online in MS Teams

Recap of the first semester of Doctoral Colloquia (Summer 2023)

Upcoming presentations

František Dráček: Anomaly detection from TLE data


The burgeoning number of objects in Low Earth Orbit (LEO) poses a critical challenge for satellite operators, demanding robust collision avoidance measures. Although extensive databases track these objects, identifying anomalies within this data is crucial. This research investigates the application of machine learning methods to automatically detect anomalies in satellite data, potentially enhancing space situational awareness and safeguarding future space operations.
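To give a flavour of this kind of work, the short Python sketch below flags anomalies in a toy set of orbital-element time series using an off-the-shelf unsupervised detector (scikit-learn's IsolationForest). It is purely illustrative: the synthetic features, the injected jumps, and the choice of detector are assumptions made for the example, not the method presented in the talk.

```python
# Illustrative sketch only: generic unsupervised anomaly detection on
# TLE-like orbital elements, NOT the presenter's actual method.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic per-epoch features for one satellite: mean motion [rev/day],
# eccentricity, inclination [deg]. All values are made up for the example.
n_epochs = 1000
mean_motion = 15.5 + 0.001 * rng.standard_normal(n_epochs)
eccentricity = 0.001 + 1e-5 * rng.standard_normal(n_epochs)
inclination = 53.0 + 0.01 * rng.standard_normal(n_epochs)

# Inject a few manoeuvre-like jumps in mean motion as synthetic anomalies.
mean_motion[[200, 650, 900]] += 0.05

X = np.column_stack([mean_motion, eccentricity, inclination])

# Fit an Isolation Forest and report the epochs it labels as outliers (-1).
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(X)
anomalous_epochs = np.flatnonzero(labels == -1)
print("Flagged epochs:", anomalous_epochs)
```

In a real setting the features would come from parsed TLE records rather than random numbers, and the detector and its contamination rate would be tuned to the catalogue being monitored.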

Plan for this semester

Presentations Plan
PhD Student Date
Peter Anthony 26 Feb
František Dráček 4 Mar
Daniel Kyselica 11 Mar
Filip Kerák 18 Mar
Janka Boborová 25 Mar
Marek Šuppa 8 Apr
Radovan Gregor 15 Apr
Fatana Jafari 22 Apr
Elena Štefancová 29 Apr
Pavol Kollár 6 May
Ján Pastorek 13 May

Past presentations

Summer semester 2023/24

Peter Anthony: Tailoring Logic Explained Network for a Robust Explainable Malware Detection

Peter Anthony presenting at a DAI Doctoral Colloquium

The field of malware research faces persistent challenges in adopting machine learning solutions due to low generalization and a lack of explainability. While deep learning, particularly artificial neural networks, has shown promise in addressing the generalization problem, its inherent black-box nature makes it difficult to provide explicit explanations for predictions. Interpretable machine learning models, such as linear regression and decision trees, prioritize transparency but often sacrifice performance. In this work, to address the need for both robustness and explainability in cybersecurity, we propose applying a recently proposed interpretable-by-design neural network, the Logic Explained Network (LEN), to the complex landscape of malware detection. We investigate the effectiveness of LEN in discriminating malware and providing meaningful explanations, and we evaluate the quality of the explanations over increasing feature sizes using fidelity and other standard metrics. Additionally, we introduce an improvement to the simplification approach for the global explanation. Our analysis was carried out on static malware features provided by the EMBER dataset. The experimental results show that LEN's discriminating performance is competitive with black-box deep learning models, and its explanations demonstrate high fidelity, indicating that they genuinely reflect the model's inner workings. However, we identify a notable trade-off between explanation fidelity and compactness.
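The fidelity metric mentioned above can be illustrated with a minimal sketch: it measures how often an extracted rule reproduces the classifier's own predictions. The snippet below uses a small decision tree and a hand-written Boolean rule as stand-ins; neither is a Logic Explained Network, and the features are toy values rather than EMBER data.

```python
# Minimal sketch of an explanation-fidelity check. The classifier and the
# "extracted" rule are hypothetical stand-ins, not the LEN models or EMBER
# features used in the actual work.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy binary "static features" (e.g. presence of suspicious API calls) and
# labels generated from a known logical rule.
X = rng.integers(0, 2, size=(500, 4))
y = ((X[:, 0] & X[:, 1]) | X[:, 2]).astype(int)

# Stand-in black-box classifier.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def explanation(x):
    """Hypothetical global rule extracted from the model: (f0 AND f1) OR f2."""
    return int((x[0] and x[1]) or x[2])

# Fidelity: the fraction of samples on which the rule reproduces the model's
# own predictions (not the ground-truth labels).
model_pred = model.predict(X)
rule_pred = np.array([explanation(x) for x in X])
fidelity = float((model_pred == rule_pred).mean())
print(f"Explanation fidelity: {fidelity:.3f}")
```

A fidelity close to 1.0 means the rule faithfully mirrors the model's behaviour; the trade-off noted in the abstract arises because shorter, more compact rules tend to lose some of that fidelity.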

Winter semester 2023/24

To appear.


Summer semester 2022/23

To appear.