KTH Ceremony on November 17, 2023

We were once again at the ceremony at which KTH officially awards doctoral degrees. This time it was Alireza Farshin’s turn, and we used the beautiful Stockholm City Hall to recreate our favorite hallway shot!

Dejan congratulates Alireza on officially receiving his PhD degree from KTH. Image taken by Ana Kostic

SEMLA: New Vinnova-funded project on LLMs for cybersecurity

Our “SEMLA: Securing Enterprises via Machine-Learning-based Automation” project proposal has been selected for funding by Vinnova. The project cost is 12 MSEK, with Prof. Marco Chiesa as the PI. Other project partners include members of the Computer Security group at KTH, the Connected Intelligence unit at RISE, Red Hat, and Saab.

The SEMLA project seeks to make the development of software systems more resilient, secure, and cost-effective. SEMLA leverages recent advancements in machine learning (ML) and artificial intelligence (AI) to automate critical yet common and time-consuming tasks in software development that often lead to catastrophic security vulnerabilities.

Switcharoo accepted at CoNEXT 2023

Today’s network functions require keeping state at the granularity of individual flows. Storing such state on network devices is highly challenging due to the complexity of the data structures involved. As a result, the state is often stored on inefficient CPU-based servers rather than on high-speed ASIC network switches. In our newly accepted CoNEXT paper, we demonstrate that it is possible to perform tens of millions of low-latency flow-state insertions on ASIC switches, showing that our implementation reduces memory requirements by 75x compared to existing probabilistic data structures in a common datacenter scenario. A PDF of the paper will be available soon. This was joint work between Mariano Scazzariello, Tommaso Caiazzi (from Roma Tre University), and Marco Chiesa.
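To give a flavor of what per-flow state insertion involves, here is a minimal sketch of a two-table cuckoo-style hash table, a family of data structures commonly used for per-flow state on switches. This is a hypothetical illustration, not Switcharoo’s actual data-plane implementation; the class and key names are invented for the example.

```python
import hashlib

class CuckooFlowTable:
    """Toy two-table cuckoo hash table for per-flow state (illustrative only)."""

    def __init__(self, slots=1024, max_kicks=32):
        # Two fixed-size tables; each slot holds (flow_key, state) or None.
        self.tables = [[None] * slots, [None] * slots]
        self.slots = slots
        self.max_kicks = max_kicks  # bound on eviction chains

    def _index(self, key, t):
        # Two independent hash functions, derived by salting with the table id.
        digest = hashlib.blake2b(key.encode(), salt=bytes([t])).digest()
        return int.from_bytes(digest[:4], "little") % self.slots

    def insert(self, key, state):
        entry = (key, state)
        t = 0
        for _ in range(self.max_kicks):
            i = self._index(entry[0], t)
            resident = self.tables[t][i]
            if resident is None or resident[0] == entry[0]:
                self.tables[t][i] = entry  # empty slot or update in place
                return True
            # Collision: evict the resident entry and retry it in the other table.
            self.tables[t][i], entry = entry, resident
            t = 1 - t
        return False  # table too full: insertion failed

    def lookup(self, key):
        for t in (0, 1):
            e = self.tables[t][self._index(key, t)]
            if e is not None and e[0] == key:
                return e[1]
        return None
```

The eviction chain is what makes insertions hard to implement at line rate on ASIC pipelines, which process packets in a fixed number of stages; bounding and handling those relocations is the core challenge the paper addresses.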

Daniel’s presentation at IPSN ’23 “DeepGANTT: A Scalable Deep Learning Scheduler for Backscatter Networks”

At ACM IPSN ’23, Daniel presented our work on DeepGANTT, a scheduler that demonstrates how transformers and graph neural networks can be combined to scale an IoT scheduling problem 6x–11x beyond what a constraint optimization solver can solve in a reasonable time. The full abstract is below.

This is joint work with

Daniel F. Perez-Ramirez, Carlos Pérez-Penichet, Nicolas Tsiftes (RISE), Thiemo Voigt (Uppsala University and RISE), Dejan Kostić, and Magnus Boman (KTH).

Novel backscatter communication techniques enable battery-free sensor tags to interoperate with unmodified standard IoT devices, extending a sensor network’s capabilities in a scalable manner. Without requiring additional dedicated infrastructure, the battery-free tags harvest energy from the environment, while the IoT devices provide them with the unmodulated carrier they need to communicate. A schedule coordinates the provision of carriers for the communications of battery-free devices with IoT nodes. Optimal carrier scheduling is an NP-hard problem that limits the scalability of network deployments. Thus, existing solutions waste energy and other valuable resources by scheduling the carriers suboptimally. We present DeepGANTT, a deep learning scheduler that leverages graph neural networks to efficiently provide near-optimal carrier scheduling. We train our scheduler with optimal schedules of relatively small networks obtained from a constraint optimization solver, achieving a performance within 3% of the optimum. Without the need to retrain, our scheduler generalizes to networks 6x larger in the number of nodes and 10x larger in the number of tags than those used for training. DeepGANTT breaks the scalability limitations of the optimal scheduler and reduces carrier utilization compared to the state-of-the-art heuristic. As a consequence, our scheduler efficiently reduces energy and spectrum utilization in backscatter networks.
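To give a rough intuition for why carrier scheduling is hard, here is a loose toy formulation (not DeepGANTT’s actual model): tags whose carrier provisions would interfere cannot share a timeslot, so assigning slots resembles graph coloring, which is NP-hard in general. A simple greedy baseline, of the kind the optimal solver and learned scheduler both outperform, might look like this; the `conflicts` encoding is an invented simplification.

```python
def greedy_schedule(conflicts):
    """Assign each tag a timeslot so no two interfering tags share one.

    conflicts: dict mapping tag -> set of interfering tags (symmetric).
    Returns: dict mapping tag -> slot index (0-based).
    """
    schedule = {}
    # Schedule the most-constrained tags first (highest-degree heuristic).
    for tag in sorted(conflicts, key=lambda t: -len(conflicts[t])):
        taken = {schedule[n] for n in conflicts[tag] if n in schedule}
        slot = 0
        while slot in taken:  # pick the lowest free slot
            slot += 1
        schedule[tag] = slot
    return schedule
```

A greedy heuristic like this is fast but can use far more timeslots (and hence carriers) than necessary; the constraint solver finds optimal schedules but does not scale, which is the gap the learned scheduler targets.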

Our recent IOMMU PeerJ CS article

Can networking applications achieve suitable performance with the IOMMU enabled at high rates? Our recent PeerJ CS article answers this question by characterizing the performance implications of the IOMMU and its cache (the IOTLB) on recent Intel Xeon Scalable and AMD EPYC processors at 200 Gbps. Our study shows that enabling the IOMMU at high rates can result in an up-to-20-percent throughput drop due to excessive IOTLB misses. Moreover, we present potential mitigation techniques that recover the throughput drop caused by this “IOTLB wall” by using hugepage-backed buffers in the Linux kernel. This is joint work with Alireza Farshin (KTH), Luigi Rizzo (Google), Khaled Elmeleegy (Google), and Dejan Kostić (KTH). Follow the links for the PDF and code.
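As a small illustration of the mitigation direction (a sketch, not the article’s measurement setup): on Linux, a buffer can be backed by hugepages via mmap’s MAP_HUGETLB flag, so a single IOTLB/TLB entry covers a 2 MiB region instead of a 4 KiB page, reducing miss pressure. The call fails unless hugepages have been reserved (e.g. via /proc/sys/vm/nr_hugepages), so this sketch falls back to regular pages; the function name is invented for the example.

```python
import mmap

def alloc_buffer(size=2 * 1024 * 1024):
    """Try to allocate an anonymous hugepage-backed buffer; fall back to 4 KiB pages.

    Returns (buffer, used_hugepages).
    """
    # MAP_HUGETLB is Linux-only and exposed by Python's mmap module on 3.10+.
    hugetlb = getattr(mmap, "MAP_HUGETLB", 0)
    flags = mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS
    if hugetlb:
        try:
            return mmap.mmap(-1, size, flags=flags | hugetlb), True
        except OSError:
            pass  # no hugepages reserved; fall through to regular pages
    return mmap.mmap(-1, size, flags=flags), False

buf, huge = alloc_buffer()
buf[0:4] = b"\x01\x02\x03\x04"  # the buffer is usable either way
```

For DMA buffers behind an IOMMU, the same idea applies at the IOTLB level: mapping packet buffers with larger pages shrinks the number of IOVA translations the IOTLB must hold.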