Last week, the HOL4P4 formalisation of the P4 language was presented at the OOPSLA conference. HOL4P4 is a model of P4 written in the interactive theorem prover HOL4; it differs from existing work in its efficient and judicious scoping mechanism and its fine-grained, incremental semantic style. These design choices make it easier to build analysis tools on top of HOL4P4 and to model concurrent phenomena. The HOL4P4 formal semantics has since been used to create a practical, theorem-prover-backed verification tool for P4, which has already been presented at VSTTE.
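To give a flavour of why a fine-grained, incremental semantics helps here (a generic illustration in our own notation, not the actual HOL4P4 definitions): with small-step relations →_P and →_Q for two blocks P and Q running in parallel over a shared state σ, the combined execution is any interleaving of their individual steps,

    (P, Q, σ) → (P', Q, σ')    whenever (P, σ) →_P (P', σ')
    (P, Q, σ) → (P, Q', σ')    whenever (Q, σ) →_Q (Q', σ')

whereas a coarse, big-step semantics only relates initial states to final states and cannot express such interleavings.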
GitHub code: link
Paper PDF: link
Authors: Anoud Alshnakat, Didrik Lundberg, Roberto Guanciale, Mads Dam
In a recent paper, we show how to formally verify P4 code using the interactive theorem prover (ITP) HOL4. The method produces an end-to-end proof entirely inside the logic of the ITP, and it is applicable to real P4 programs thanks to the HOL4P4 import tool. To gain efficiency when symbolically executing large programs, the method uses several types of semantics, and proofs of their soundness with respect to the standard semantics allow their results to be composed.
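Schematically (an illustrative statement in our own notation, not the exact theorem from the paper), soundness of an auxiliary semantics ⇓_aux with respect to the standard semantics ⇓_std says that every result derivable with the auxiliary semantics is also derivable with the standard one:

    ∀ s s'.  s ⇓_aux s'  ⟹  s ⇓_std s'

so any property established by symbolic execution under ⇓_aux also holds of the program's behaviour under ⇓_std, which is what allows results from the different semantics to be composed into a single end-to-end proof.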
This tool can be used by SEMLA in the future to formally verify that P4 code generated by LLMs respects certain program specifications.
Authors: Didrik Lundberg, Roberto Guanciale, Mads Dam.
GitHub code: link
Paper PDF (pre-conference version): link
Marco Chiesa presented the recent results from our CoNEXT paper at NVIDIA Research.
Can Large Language Models facilitate network configuration? In our recently accepted CoNEXT 2024 paper, we investigate the opportunities and challenges of operating network systems using recent LLMs.
We devise a benchmark for evaluating the capabilities of different LLMs on a variety of networking tasks and show different ways of integrating such models within existing systems. Our results show that different models work better on different tasks. Translating high-level human-language requirements into formal specifications (e.g., API function calls) can be done with small models. However, generating code that controls network systems is only feasible with larger LLMs, such as GPT-4.
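To make the function-calling task concrete, here is a minimal sketch of such a pipeline in Python (the function catalogue, prompt format, and call_llm placeholder are hypothetical illustrations, not taken from our benchmark or paper): a human-language requirement is turned into a prompt, the model answers with a JSON function call, and the answer is validated against the API specification before being executed.

    import json

    # Hypothetical catalogue of network-control functions the model may call.
    API_SPEC = {
        "add_acl_rule": {"params": ["switch", "src_ip", "dst_ip", "action"]},
        "set_link_weight": {"params": ["link", "weight"]},
    }

    def build_prompt(requirement: str) -> str:
        # Ask the model to answer with a single JSON function call.
        return (
            "Available functions:\n" + json.dumps(API_SPEC, indent=2) + "\n"
            'Return ONLY a JSON object {"function": ..., "arguments": {...}} '
            "satisfying this requirement:\n" + requirement
        )

    def call_llm(prompt: str) -> str:
        # Placeholder for a call to a (small) LLM; canned answer so the sketch runs.
        return json.dumps({"function": "add_acl_rule",
                           "arguments": {"switch": "s1", "src_ip": "10.0.0.0/24",
                                         "dst_ip": "10.0.1.0/24", "action": "deny"}})

    def parse_and_validate(raw: str) -> dict:
        # Check that the model named a known function and supplied all its parameters.
        call = json.loads(raw)
        expected = set(API_SPEC[call["function"]]["params"])
        missing = expected - set(call["arguments"])
        if missing:
            raise ValueError(f"missing arguments: {missing}")
        return call

    requirement = "Block all traffic from 10.0.0.0/24 to 10.0.1.0/24 on switch s1."
    print(parse_and_validate(call_llm(build_prompt(requirement))))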
This is a fundamental first step in our SEMLA project, which looks at ways to integrate LLMs into system development.
GitHub code: link
Hugging Face: link
Paper PDF: link
Two exceptional students have been hired through a selective process within the SEMLA project: Laura Puccioni and Daniele Cipollone. Laura is working on understanding how to compress models for code generation, while Daniele is focusing on detecting vulnerabilities using LLMs. We look forward to seeing the outcomes of their theses!