Doctoral Colloquia

PhD Students watching a presentation at a DAI Doctoral Colloquium
Project TERAIS

Doctoral colloquia are a platform for PhD students at DAI to present their research to a wider departmental audience, exchange ideas, and foster friendships. They are organized on a weekly basis during the semester by assoc. prof. Damas Gruska. They are among the activities within the TERAIS project, aimed at elevating DAI to a workplace of international academic excellence.

When
weekly on Mondays at 13:10 (during the teaching part of the semester)
Where
I-9 and online in MS Teams

Recap of the first semester of Doctoral Colloquia (Summer 2023)

Upcoming presentations

Daniel Kyselica: Processing of light curves of satellites and space debris for the purpose of their identification

Daniel Kyselica's photo

With the increase in space traffic in recent years, precise monitoring of space debris is necessary. Observations in the form of light curves provide us with information about an object’s physical properties, including shape, size, surface materials and rotation. Publicly available databases contain a huge amount of gathered light curves that can be used to train machine learning models. Pieces of space debris can fall back to Earth in a process called reentry; 3D reconstruction of this event can therefore bring a deeper understanding of the physical processes involved.
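One basic step in processing such light curves is recovering an object's apparent rotation period from its periodic brightness variations. The following sketch illustrates the idea on a synthetic curve (the period, sampling, and noise level are invented for illustration; real curves from public databases are noisy and irregularly sampled):

```python
import numpy as np

# Synthetic light curve: 60 s of observations of a tumbling object with a
# hypothetical 7.5 s rotation period, plus a little measurement noise.
t = np.linspace(0, 60, 600)
period = 7.5
rng = np.random.default_rng(0)
flux = 1 + 0.2 * np.sin(2 * np.pi * t / period) + 0.01 * rng.normal(size=t.size)

# Periodogram via FFT: the dominant frequency bin gives the period estimate.
spectrum = np.abs(np.fft.rfft(flux - flux.mean()))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
estimate = 1 / freqs[np.argmax(spectrum)]
print(round(estimate, 1))  # close to the true 7.5 s period
```

Features such as the recovered period and the shape of the folded curve can then serve as inputs to the machine learning models mentioned above.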

Plan for this semester

Presentations Plan
PhD Student Date
Peter Anthony 26 Feb
František Dráček 4 Mar
Daniel Kyselica 11 Mar
Filip Kerák 18 Mar
Janka Boborová 25 Mar
Marek Šuppa 8 Apr
Radovan Gregor 15 Apr
Fatana Jafari 22 Apr
Elena Štefancová 29 Apr
Pavol Kollár 6 May
Ján Pastorek 13 May

Past presentations

Summer semester 2023/24

František Dráček: Anomaly detection from TLE data

František Dráček's photo

The burgeoning number of objects in Low Earth Orbit (LEO) poses a critical challenge for satellite operators, demanding robust collision avoidance measures. Although extensive databases track these objects, identifying anomalies within this data is crucial. This research investigates the application of machine learning methods to automatically detect anomalies in satellite data, potentially enhancing space situational awareness and safeguarding future space operations.
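As a toy illustration of the anomaly-detection idea (not the method of the talk), a robust z-score over a series of orbital elements extracted from successive TLEs can flag epochs that break the local trend, e.g. a manoeuvre-like jump in mean motion. The data below are hypothetical:

```python
# Minimal sketch: median/MAD-based z-scores flag outlying epochs in a series
# of orbital-element values (here a hypothetical mean-motion series, rev/day).

def robust_zscores(values):
    """Median/MAD z-scores; robust to the very outliers we want to find."""
    n = len(values)
    med = sorted(values)[n // 2]
    abs_dev = sorted(abs(v - med) for v in values)
    mad = abs_dev[n // 2] or 1e-12  # guard against zero MAD
    return [0.6745 * (v - med) / mad for v in values]

def detect_anomalies(values, threshold=3.5):
    return [i for i, z in enumerate(robust_zscores(values)) if abs(z) > threshold]

# One manoeuvre-like jump hidden at index 4:
mean_motion = [15.501, 15.502, 15.501, 15.503, 15.702, 15.502, 15.501]
print(detect_anomalies(mean_motion))  # [4]
```

Machine learning detectors replace this hand-set threshold with models learned from large TLE archives.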

Peter Anthony: Tailoring Logic Explained Network for a Robust Explainable Malware Detection

Peter Anthony presenting at a DAI Doctoral Colloquium

The field of malware research faces persistent challenges in adopting machine learning solutions due to issues of low generalization and a lack of explainability. While deep learning, particularly artificial neural networks, has shown promise in addressing the generalization problem, their inherent black-box nature poses challenges in providing explicit explanations for predictions. On the other hand, interpretable machine learning models, such as linear regression and decision trees, prioritize transparency but often sacrifice performance. In this work, to address the imperative needs of robustness and explainability in cybersecurity, we propose the application of a recently proposed interpretable-by-design neural network, the Logic Explained Network (LEN), to the complex landscape of malware detection. We investigated the effectiveness of LEN in discriminating malware and providing meaningful explanations, and evaluated the quality of the explanations over increasing feature sizes based on fidelity and other standard metrics. Additionally, we introduce an improvement on the simplification approach for the global explanation. Our analysis was carried out using static malware features provided by the EMBER dataset. The experimental results show that LEN's discriminating performance is competitive with black-box deep learning models. LEN's explanations demonstrated high fidelity, indicating genuine reflections of the model's inner workings. However, a notable trade-off between explanation fidelity and compactness is identified.
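The fidelity metric mentioned above can be pictured with a toy example: fidelity is the fraction of inputs on which an extracted logic rule reproduces the model's own prediction. The "model" and rule below are illustrative stand-ins, not the LEN or EMBER features from the talk:

```python
# Toy fidelity computation: how often does a candidate logic explanation
# agree with the (black-box) model it is supposed to explain?

def model(x):
    # Hypothetical detector: flags a sample only when both features are set.
    return int(x[0] and x[1])

def explanation(x):
    # Candidate extracted rule, deliberately simpler than the model.
    return int(x[0])

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
fidelity = sum(model(x) == explanation(x) for x in inputs) / len(inputs)
print(fidelity)  # 0.75: the rule mirrors the model on 3 of 4 inputs
```

A simpler rule is more compact but, as here, may lose fidelity, which is exactly the trade-off the abstract identifies.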

Winter semester 2023/24

Pócoš Štefan: RecViT: Enhancing Vision Transformer with Top-Down Information Flow

We propose and analyse a novel neural network architecture — recurrent vision transformer (RecViT). Building upon the popular vision transformer (ViT), we add a biologically inspired top-down connection, letting the network ‘reconsider’ its initial prediction. Moreover, using a recurrent connection creates space for feeding multiple similar, yet slightly modified or augmented inputs into the network, in a single forward pass. As it has been shown that a top-down connection can increase accuracy in the case of convolutional networks, we analyse our architecture, combined with multiple training strategies, in the adversarial examples (AEs) scenario. Our results show that some versions of RecViT may indeed be more robust than the baseline ViT. We also leverage the fact that transformer networks have a certain level of inherent explainability. By visualising attention maps of various input images, we gain some insight into the inner workings of our network.

Dominika Mihalova: Constructions of Hypergraphical Regular Representations

Our research is inspired by the question: Can an abstract algebraic structure, like a group, be faithfully represented by a graph? This question has been studied extensively. We extend this to a more general construction - hypergraph. In our previous research, we characterized all groups of orders less than 33 which admit hypergraphical regular representation via k-uniform hypergraph. Our latest research is focused on the hypergraphical regular representation of groups with orders greater than 32, where we combined our previous method with the method of Erskine and Tuite.

Kubík Jozef: Efficient fine-tuning of BERT models in the Slovak language

The popularity of creating large language models has been rising incredibly. Most modern LLMs offer great accuracy in many different text-based tasks but are limited by the huge amount of data required not only to pre-train but also to fine-tune these models. This problem only deepens in models trained on data from low- and mid-resource languages, such as Slovak. In our work, we examine the fine-tuning process of two such models and try to enhance it by attaching an epinet to create an epistemic neural network, a relatively new concept that helps the model detect its own uncertainty and make better decisions in the long run. In the presentation, we will show not only results based on classic fine-tuning of such a novel network but also try to use methods of active learning to lower the data requirements while retaining similar results.

Lukáš Gajdošech: Data Availability from Industrial Hardware in the Era of Consumer-grade Sensor Abundance

Algorithms based on deep learning have increasingly become an integral part of our daily lives. An essential ingredient of these methods, which is often taken for granted, is the availability of large-scale datasets covering real-world complexity. In the context of vision tasks, these usually consist of images available online. These samples are often produced by the general public without the prior goal of dataset creation. This is in sharp contrast with the situation in industry. Tasks like machine-part localization by industry-grade robots are still built upon traditional deterministic algorithms. While this may be surprising at first sight, it makes perfect sense from the perspectives of data availability, reproducibility, robustness and trustworthiness. Not only are factory scenes vastly different, but the sensors employed, such as 3D cameras, yield data in a different format and fidelity than their consumer-grade counterparts. Suddenly, gathering datasets for deep learning requires unrealistic amounts of time and manual labor in this setting. One solution discussed in this presentation will be the usage of so-called digital twins, resulting in synthetic training data. Challenges of the domain gap will be discussed in the context of the author's current research on surface-normal prediction from depth maps with missing samples. Furthermore, an analysis of two recently published works of the author from this data-importance perspective will follow. Lastly, problems of static datasets and an ever-changing world affecting these solutions will be touched upon with modern approaches such as lifelong and multi-task learning.

Endre Hamerlik: Exploring Morphosyntactic Insights in Masked Language Models through Probing and Perturbations

Masked language models (MLMs) have gained significant popularity in the field of natural language processing due to their impressive performance across various tasks. To better understand how MLMs work internally, probing techniques are widely used. These techniques involve training classifiers to predict specific language features based on the hidden representations produced by MLMs.

In our research, we've chosen to focus on the morphosyntactic features found in input texts. Our main question is whether Large Language Models (LLMs), trained solely on MLM tasks, develop representations that capture aspects like part-of-speech and morphological categories.

To shed light on the dependencies of these representations, we've introduced controlled conditions where we provide modified inputs to the pretrained language model being studied. The quality and impact of these modifications have led to intriguing findings, which we'll be presenting at the upcoming PhD colloquium scheduled for November 6, 2023.
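The probing setup described above can be sketched in miniature: a linear classifier is trained on frozen hidden representations to predict a linguistic label, and high accuracy indicates the feature is linearly decodable. The representations below are synthetic stand-ins for MLM hidden states, not outputs of an actual pretrained model:

```python
import numpy as np

# Toy probing experiment: a logistic-regression probe on frozen "hidden states".
rng = np.random.default_rng(0)
dim, n = 16, 200

# Synthetic representations in which a toy morphosyntactic label (e.g. a POS
# distinction) is encoded along one latent direction.
direction = rng.normal(size=dim)
X = rng.normal(size=(n, dim))
y = (X @ direction > 0).astype(int)

# Train the probe by gradient descent; the underlying model stays frozen --
# only the probe's weights w are updated.
w = np.zeros(dim)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / n

accuracy = float(((X @ w > 0).astype(int) == y).mean())
print(accuracy)  # near-perfect: the label is linearly decodable
```

In the actual study, the inputs to such probes are hidden states of mBERT-style models, and the perturbation experiments test how fragile this decodability is.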

Dana Škorvánková: Towards Automatic Human Measurements and Pose Extraction

The estimation of human body pose and its measurements is an emerging problem that draws attention in many research areas. An automatic and accurate approach to the problem is crucial in many fields of computer vision-oriented industry. Our research targets multiple human body analysis-related tasks, including pose estimation, pose tracking, and anthropometric body measurement estimation. Similarly to other research fields, deep learning methods have proved to outperform analytical strategies. We also examine various types of visual input data, including three-dimensional point clouds. Since obtaining a large-scale database of real annotated training data is time-consuming and ineffective, we propose to substitute or augment the training process with synthetically generated human body data. We will report the main challenges within each of the stated tasks, along with our proposed approaches to address them, and include the already published parts of our research.

Matej Fandl: Retrieving memories from meta-stable states of continuous modern Hopfield networks

Continuous modern Hopfield networks are a model of associative memory with large storage capacity. They can be used both as a component in deep learning architectures and as a standalone module for pattern completion and denoising. Our work aims at improving the training of these networks by using competition to achieve distributed representations of training patterns and their effective storage in meta-stable states.
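The retrieval dynamics of a continuous modern Hopfield network can be sketched in a few lines: a query is repeatedly pulled towards stored patterns through a softmax over similarities (the attention-like update of Ramsauer et al.). The patterns below are random illustrative vectors, not trained representations:

```python
import numpy as np

def retrieve(X, xi, beta=8.0, steps=3):
    """X: (dim, n_patterns) stored patterns; xi: (dim,) noisy query.

    Each step computes softmax(beta * X^T xi) over the stored patterns and
    moves the query towards their weighted combination; for large beta the
    query converges onto the nearest stored pattern.
    """
    for _ in range(steps):
        sims = beta * X.T @ xi
        p = np.exp(sims - sims.max())
        p /= p.sum()                 # softmax attention over patterns
        xi = X @ p                   # updated query
    return xi

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 5))                  # five stored patterns
noisy = X[:, 2] + 0.3 * rng.normal(size=32)   # corrupted copy of pattern 2
out = retrieve(X, noisy)
print(np.argmax(X.T @ out))  # retrieval lands on pattern 2
```

With smaller beta the softmax mixes several patterns, which is exactly the meta-stable-state regime the talk targets for distributed storage.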

Bečková Iveta: Towards legible motion of simulated NICO

In human-robot interaction, we expect the robot to act in such a way as to ensure smooth cooperation. One desirable property of a robot's motion is legibility. Motion is defined as legible if an observer is able to correctly predict the motion's goal, while doing so as soon as possible. However, training the robot to be legible with human participants is not feasible, therefore in our experiments with a humanoid robot NICO, we chose to use a neural network observer to predict goals. These predictions are then used to construct part of the reward signal for the acting agent. Training is done with a simulated model of NICO in Unity.

Andrej Baláž: Pangenomic data structures

Recent advancements in sequencing technologies brought a steep decrease in the acquisition price of biological sequences and a rapid growth in the size of novel genomic datasets. This growth and the shifting paradigm of jointly analyzing all the related sequences, also called pangenomics, demand new data structures and algorithms for efficient processing. We will present several data structures that form the building blocks of fundamental bioinformatics algorithms, such as read alignment. Due to the immense sizes of the pangenomic datasets, these algorithms often have to work in compressed space while remaining time-efficient to be practical. Therefore, we will show several techniques to achieve this efficiency and highlight our contributions to the field of pangenomic data structures.

Summer semester 2022/23

Lukáš Radoský: Optimization and Reuse in Development of Large Software Systems

Growing requirements and the complexity of software systems call for a sophisticated and creative process of analysis and design, with the intensive cooperation of many experts with various specializations, who are often informed about the real state and problems only at the last moment during analysis and development. Therefore, one of the main motivations for this research is to analyze and design appropriate methods in the following areas:
1. collaborative and parallel modeling and development through common and shared software models, to increase productivity and work efficiency,
2. visualization of parallel layers in multidimensional space with particular modules, use cases, versions, or alternative and parallel flows of scenarios, to reduce vague and redundant elements and achieve a lean and optimal architecture,
3. progressive and advanced methods of teaching programming in a graphical environment and in virtual and augmented reality,
4. refactoring and reusing knowledge in models and source code,
5. fusion of models; visualization of functionality, patterns and use-case scenarios in software architectures,
6. multidimensional visualization in virtual and augmented reality (VR and AR) of source-code structure; topics and sources of knowledge; evolution and quality (identification of patterns and bad smells); authors and users; and interconnections with the models, to reduce the cognitive load and complexity of large UML models using layers and to decompose the system for review and deeper understanding, which can lead to more effective implementations.

Anthony Peter: An Improved Classifier for Learning and Discriminating Malware Using Knowledge Base Embedding

Malware detection is a critical task in cybersecurity, and traditional signature-based approaches are often ineffective against new and evolving threats. Recent research has shown that machine learning models can improve the accuracy of malware classification. However, existing methods often suffer from poor generalization performance and a lack of explainability, making it difficult to understand how they arrived at their predictions. This can make it challenging for cybersecurity experts to assess the reliability of the model and identify false positives or false negatives. In this work, we aim at a novel approach that combines a graph-based representation of malware with a neural network classifier. Entities and relationships in a knowledge graph are projected into a low-dimensional space. The approach involves learning a vector representation for each entity and relationship in the knowledge base while preserving their semantic meaning, so as to accurately discriminate between malicious and benign software. Additionally, the resulting embeddings can be used to derive explanations for the predictions, giving cybersecurity experts insights into malware behavior and decision-making processes. Overall, the goal is an approach that will present a valuable tool for malware detection and analysis in real-world settings, with accurate predictions and meaningful explanations.

Dráček František: Anomaly detection from TLE data

The rapidly growing number of objects in Low Earth Orbit presents a significant challenge for satellite operators, who must ensure that satellites do not collide with other objects, which could lead to the loss of the satellite or, worse, to its fragmentation.

A significant research effort is devoted to the assessment of close-conjunction risks. Most such approaches rely on two-line element (TLE) data. A TLE contains the orbital elements of a space object at a specific time. Aside from estimating collision risks, TLEs are used to calculate re-entry and for drag make-up and space-weather estimation.

Our aim is to identify and study anomalies in the TLE data. The anomalies are in some cases related to errors in the measurement, but also to errors caused by the osculating (averaged) character of the elements.

Gajdošech Lukáš: Evolution of Fusion on High-Quality Depth Data

The availability of 3D sensors acquiring depth data caused a rise of interest in the problem of registering depth maps from a sequence with unknown transformations into a common space. Usually, surface reconstruction of the growing point cloud is performed on the fly. This procedure is known as Fusion, and it has been a popular topic since the release of the KinectFusion paper in 2011. A similar task in robotics, when only 2D images are available, is called Structure from Motion. Nowadays, both RGB cameras and depth sensors are available in much higher resolution, with better exposure control and higher framerates. In this presentation, we will give a chronological overview of the papers regarding the Fusion problem and the available datasets. Then, we will present data obtained using a novel structured-light-based sensor yielding high-resolution 1120x800 depth maps in real time. This sensor is used primarily in industrial settings. Some sequences are designed to be hard, with rapid movement between frames, circular motion, and symmetrical objects, where the transformation calculation using only geometry data is ill-defined. On this data, we will compare the results of traditional pipelines with detectors like AKAZE for texture features against a hybrid approach, where a neural network is used for obtaining texture correspondences.

Mihálová Dominika: Optimal structures based on algebraic constructions

The use of computers to solve problems in the field of mathematics has become more important with the growing complexity of the considered problems. In our work, we focus on two different problems: the Cage problem from the area of Extremal Graph Theory and the Regular representation problem from the area of Algebraic Graph Theory. We present the Cage problem and computational techniques of how to approach it for given specific parameters. We describe our computational approach to the Regular representation problem of k-uniform hypergraphs for groups of order smaller than 33 with our published results.

Jozef Kubík: Active Learning in Large Language Models

In recent years, the popularity of creating large language models has been rising incredibly. Most modern LLMs based on the Transformer architecture offer great accuracy in many different text-based tasks but are often limited in some areas. For many low- or mid-resource languages (such as Slovak), one of the biggest limitations is the amount of annotated data needed for fine-tuning such a big model. Our work aims to highlight this problem with the BERT line of models and suggest a promising method of reducing the data requirements for low-resource languages, based on recent developments in the area of active learning, thanks to the novel concept of epistemic neural networks.

Kyselica Daniel: Processing of light curves of satellites and space debris for the purpose of their identification

With the increase in space traffic in recent years, precise monitoring of space debris is necessary. Observations in the form of light curves provide us with information about an object’s physical properties, including shape, size, surface materials and rotation. Publicly available databases contain a huge amount of gathered light curves that can be used to train machine learning models. Pieces of space debris can fall back to Earth in a process called reentry; 3D reconstruction of this event can therefore bring a deeper understanding of the physical processes involved.

Andrej Baláž : Compressed self-indexes for pangenomic datasets

Recent advancements in sequencing technologies brought a steep decrease in the acquisition price of biological sequences and a rapid growth in the size of novel genomic datasets. This growth and the shifting paradigm of jointly analysing all the related sequences, also called pangenomics, demand new data structures and algorithms for efficient processing. We will present several data structures, also called self-indexes, which form the basic building blocks of fundamental bioinformatics algorithms, such as read alignment. Due to the immense sizes of the pangenomic datasets, these self-indexes have to be compressed while remaining time-efficient to be practical. Therefore, we will show two compression techniques, tunnelling and r-indexing, and highlight our contributions to compressed self-indexes in the form of a space-efficient construction algorithm and a pattern-matching algorithm.
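The self-indexes mentioned above (FM-index, r-index) are built on the Burrows-Wheeler transform, which tends to group equal symbols of repetitive pangenomic text into long runs. A naive rotation-sort sketch (illustrative only; practical construction algorithms work in compressed space):

```python
# Burrows-Wheeler transform by sorting all rotations of the input string.
# Real pangenomic indexes never materialize the rotations like this.

def bwt(s):
    s += "$"  # unique end-of-string sentinel, lexicographically smallest
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)  # last column of the sorted matrix

print(bwt("banana"))  # "annb$aa"
```

The runs of equal characters in the output are what r-indexing exploits: the index size scales with the number of runs rather than the text length.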

Fandl Matej: Attractor models of associative memory - on learning in modern Hopfield networks

Neural networks exhibiting point-attractor dynamics are useful for modeling associative memory. A well-known example is the Hopfield network, the modern variants of which got into the spotlight recently due to their huge storage capacity and usability in deep learning architectures. Our work builds on the interpretation of these networks as networks with one hidden layer of feature detectors. We see space for improvement in terms of training modern Hopfield networks, since the currently known methods either sacrifice training time for the computational complexity of the model, or the other way around. Our talk will describe our attempts at designing a novel learning rule for these networks, the use of which is expected to lead to a fast and efficient distribution of labor between the hidden units.

Dana Škorvánková: Automatic 3D Human Pose Estimation, Skeleton Tracking and Body Measurements

The estimation of human body pose and its measurements is an emerging problem that draws attention in many research areas. An automatic and accurate approach to the problem is crucial in many fields of computer vision-oriented industry. We target multiple human body analysis-related tasks, including pose estimation, pose tracking, and anthropometric body measurement estimation. Similarly to other research fields, deep learning methods have proved to outperform analytical strategies. We also examine various types of visual input data, including three-dimensional point clouds. Since obtaining a large-scale database of real annotated training data is time-consuming and ineffective, we propose to substitute or augment the training process with synthetically generated human body data. We will report preliminary results of our experiments within each of the stated tasks, along with the already published parts of our research.

Iveta Bečková: Adversarial Examples in Deep Learning

Deep neural networks achieve remarkable performance in multiple fields. However, even after proper training they suffer from an inherent vulnerability against adversarial examples (AEs). The AEs attempt to find the worst-case perturbation in input space, resulting in faulty output (such as misclassification). Different methods of attack provide different approximations of this worst case, and each of them has certain advantages and disadvantages. The problem gets even more complicated in the deep RL setting, where time is also a factor. We will present our work on comparing different adversarial attacks, as well as plans for future research in adversarial attacks on deep RL agents.
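The "worst-case perturbation" idea can be seen in miniature with the fast gradient sign method (one of the classic attacks): within an L-infinity ball, the loss grows fastest along the sign of its gradient. The model and data below are a toy linear classifier, not the networks from the talk:

```python
import numpy as np

# Toy linear classifier: score = w . x, predicted class = sign(score).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.4, -0.3, 0.2])   # input correctly classified as +1
label = 1.0

def margin(x):
    return label * (w @ x)       # > 0 means correct classification

# For loss = -margin, the gradient w.r.t. x is -label * w, so an FGSM step
# moves each coordinate by eps along sign(-label * w).
eps = 0.6
x_adv = x + eps * np.sign(-label * w)

print(margin(x), margin(x_adv))  # the margin flips sign under the attack
```

For deep networks the gradient is obtained by backpropagation, and iterative variants (e.g. PGD) refine this single step; each such method is a different approximation of the worst case.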

Štefan Pócoš: Explainability and Interpretability of Deep Neural Networks

Artificial neural networks (ANNs) are well known to achieve outstanding accuracy in plenty of tasks. Although in some cases their performance is approaching its peak, it is not the only aspect researchers are concerned about. One of the downsides of modern ANNs is their lack of explainability and interpretability. This is easily demonstrated by fooling them using adversarial examples. We examined this aspect of ANNs by applying several methods to fool them and visualized their inner behavior when fooled. We will also talk about our plans for future research, including the usage of modern networks based on attention.

Endre Hamerlik: Morphology Spreads Progressively: Evidence from Probing BERTs

Large language models (LLMs) have become increasingly popular in natural language processing due to their impressive performance on a range of tasks. To better understand the internal representations of LLMs, probing techniques are being used, which involve training diagnostic classifiers to predict specific linguistic features based on the LLM's hidden representations.

In this study, we investigate multilingual LLMs' (mBERT's and XLM-RoBERTa's) internal representations using probing techniques and explore the effect of input perturbations on these representations. We also introduce new controls and ablations to evaluate the impact of these perturbations on the diagnostic classifiers' performance. We utilize Shapley values, a model-agnostic approach, to identify the most influential tokens in the input that affect the LLM's internal representations.

Our results indicate that the diagnostic classifiers are highly sensitive to input perturbations, implying that LLMs' representations are highly dependent on specific linguistic features like morphology. The analysis of Shapley values provides insight into which input tokens have the greatest impact on the LLM's representations of morphological features. One of the most intriguing findings that emerges is a strong tendency for the preceding context to hold more morphosyntactic information relevant to the prediction than the following context.

Juraj Vincur: Software development in virtual and augmented reality

Software engineers primarily interact with source code using a keyboard and a mouse, and typically view it in a "bento box" IDE displayed on a small number of 2D monitors. We believe that this traditional approach ignores the potential of newly emerged VR and AR technologies.

The main goal of our research is to design and implement various integrations of VR and AR technologies in the software development process and evaluate their potential in the given field. In the upcoming seminar we will present our prototypes and discuss the results.
