
Giacomo Verardo’s presentation at ICPRAI 2024: “FMM-Head: Enhancing Autoencoder-based ECG anomaly detection with prior knowledge”

We are happy to announce that Giacomo presented our paper titled “FMM-Head: Enhancing Autoencoder-based ECG anomaly detection with prior knowledge” at ICPRAI 2024. This work shows the benefit of taking an underlying model of the heart (via Frequency Modulated Möbius waves) into account when performing ECG anomaly detection. Our model achieves up to a 0.31 increase in area under the ROC curve (AUROC) compared to state-of-the-art models. Moreover, its processing time is four orders of magnitude lower than solving an optimization problem to obtain the same parameters, making it suitable for real-time ECG parameter extraction and anomaly detection.

This is joint work with Giacomo Verardo (KTH), Magnus Boman (KI), Samuel Bruchfeld (KI), Marco Chiesa (KTH), Sabine Koch (KI), Gerald Q. Maguire Jr. (KTH), and Dejan Kostic (KTH).

The paper received an Honorable Mention in the competition for the Best Paper Award and is available at this link. The full abstract is below:

Detecting anomalies in electrocardiogram data is crucial to identify deviations from normal heartbeat patterns and provide timely intervention to at-risk patients. Various AutoEncoder (AE) models have been proposed to tackle the anomaly detection task with ML. However, these models do not explicitly consider the specific patterns of ECG leads, thus compromising learning efficiency. In contrast, we replace the decoding part of the AE with a reconstruction head (namely, FMM-Head) based on prior knowledge of the ECG shape. Our model consistently achieves higher anomaly detection capabilities than state-of-the-art models, up to a 0.31 increase in area under the ROC curve (AUROC), with as little as half the original model size and explainable extracted features. The processing time of our model is four orders of magnitude lower than solving an optimization problem to obtain the same parameters, thus making it suitable for real-time ECG parameter extraction and anomaly detection. The code is available at: https://github.com/giacomoverardo/FMM-Head.
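To give a feel for the prior knowledge involved: the FMM literature models a heartbeat as a sum of Frequency Modulated Möbius waves, one per ECG wave (P, Q, R, S, T). A minimal sketch of that wave family is below; the parameter values are purely illustrative (not fitted, and not taken from the paper), and the function names are ours:

```python
import numpy as np

def fmm_wave(t, m, a, alpha, beta, omega):
    """One Frequency Modulated Möbius (FMM) wave over t in [0, 2*pi).

    m: baseline, a: amplitude, alpha: location, beta: skewness/phase,
    omega in (0, 1]: width. Smaller omega gives a sharper peak."""
    phi = beta + 2.0 * np.arctan(omega * np.tan((t - alpha) / 2.0))
    return m + a * np.cos(phi)

# Illustrative (hypothetical) parameters for the five ECG waves,
# each as (amplitude, location, phase, width):
waves = {
    "P": (0.20, 5.0, 0.0, 0.50),
    "Q": (-0.40, 5.9, -1.2, 0.10),
    "R": (1.00, 6.0, 0.0, 0.05),
    "S": (-0.30, 6.1, 1.2, 0.10),
    "T": (0.35, 1.5, 0.0, 0.50),
}
t = np.linspace(0, 2 * np.pi, 512, endpoint=False)
ecg = sum(fmm_wave(t, 0.0, a, al, b, w) for (a, al, b, w) in waves.values())
```

Under this view, FMM-Head turns the decoder into a module that outputs these interpretable parameters per wave, rather than raw samples, which is what makes the extracted features explainable.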

Giacomo Verardo’s Licentiate Defense

We are happy to announce that Giacomo Verardo successfully defended his licentiate thesis (the licentiate is a degree at KTH, halfway to a PhD)! As usual, Marco Chiesa has done an excellent job as a co-advisor, and we are truly grateful to Prof. Gerald Q. Maguire Jr. for his stellar insights. Dr. Maxime Sermesant was a superb opponent at the licentiate seminar, with Prof. Vlad Vlassov as the examiner. Giacomo’s thesis is available online:


“Optimizing Neural Network Models for Healthcare and Federated Learning”


A few images from the defense and the celebration are below.

Giacomo presenting during the defense (image taken by Massimo Girondi).

Dejan congratulates Giacomo once Prof. Vlassov announced that Giacomo passed his defense (image taken by Massimo Girondi).

Dejan hands the traditional gift to Giacomo (image taken by Voravit Tanyingyong).

Group image with Networked Systems Laboratory members (image taken by Sanna Jarl).

Our first paper presentation: “Fast Server Learning Rate Tuning for Coded Federated Dropout”

We are excited to announce that in June 2022, Giacomo Verardo presented our first paper (in this project), titled “Fast Server Learning Rate Tuning for Coded Federated Dropout”, at the International Workshop on Trustworthy Federated Learning in Conjunction with IJCAI 2022 (FL-IJCAI’22).

This is joint work with Giacomo Verardo, Daniel Barreira, Marco Chiesa, Dejan Kostic, and Gerald Quentin Maguire Jr. The full abstract is below:

In cross-device Federated Learning (FL), clients with low computational power train a common machine learning model by exchanging parameter updates instead of potentially private data. Federated Dropout (FD) is a technique that improves the communication efficiency of an FL session by selecting a subset of model parameters to be updated in each training round. However, compared to standard FL, FD produces considerably lower accuracy and a longer convergence time. In this paper, we leverage coding theory to enhance FD by allowing different sub-models to be used at each client. We also show that by carefully tuning the server learning rate hyper-parameter, we can achieve higher training speed while also reaching up to the same final accuracy as the no-dropout case. For the EMNIST dataset, our mechanism achieves 99.6% of the final accuracy of the no-dropout case while requiring 2.43x less bandwidth to achieve this level of accuracy.
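For readers unfamiliar with the server learning rate mentioned in the abstract: in FedAvg-style aggregation, the weighted mean of client models defines an update direction, and the server learning rate scales how far the global model moves along it. A minimal sketch of that idea follows; the function and parameter names are our own illustration, and this does not include the paper's coded-dropout mechanism:

```python
import numpy as np

def server_update(global_w, client_ws, client_sizes, server_lr=1.0):
    """FedAvg-style aggregation with a server learning rate.

    The size-weighted mean of the client models defines an update
    direction; server_lr scales the step taken along it. (Sketch only:
    the paper tunes this hyper-parameter to speed convergence under
    Federated Dropout.)"""
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()
    avg = sum(w * cw for w, cw in zip(weights, client_ws))
    return global_w + server_lr * (avg - global_w)

# With server_lr = 1.0 this reduces to plain FedAvg averaging.
g = np.zeros(3)
clients = [np.array([1.0, 1.0, 1.0]), np.array([3.0, 3.0, 3.0])]
new_g = server_update(g, clients, [10, 10], server_lr=1.0)
```

A value of `server_lr` above 1.0 takes a more aggressive step than plain averaging, which is one way a tuned server learning rate can recover training speed lost to dropout.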