I am a first-year PhD student in Computer Science at GLADIA, Sapienza University of Rome. Previously, I was a visiting graduate student researcher at UC San Diego.
My research focuses on foundation model post-training, model merging, LLM evaluation, and agentic AI systems. I aim to build methods and tools that make large models more adaptable, composable, and usable.
Outside the lab, I stay active through sports 🤸 and explore my creativity through photography 📷 and music 🎹🎸. I'm always happy to collaborate, exchange ideas, or just have a chat. Feel free to reach out!
[04-2026] CaTS-Bench, our benchmark for time series captioning and reasoning, has been accepted to ACL 2026 Findings!
[01-2026] Wondering what determines model mergeability? Check out our newest preprint “Demystifying Mergeability: Interpretable Properties to Predict Model Merging Success”!
[07-2025] I graduated with honors with an MSc degree from Sapienza University of Rome! Check out my thesis here.
[07-2025] Our paper “ATM: Improving Model Merging by Alternating Tuning and Merging” has been accepted for presentation at Breaking the Monolith, the 1st ICIAP Workshop on Advances in Modular Deep Learning!
[06-2025] I've started a research internship at Panasonic North America on LLM code generation, collaborating with Stanford University and ItalAI.
[03-2025] I'm at UC San Diego as a visiting student researcher, working on a multimodal benchmark for time series captioning and reasoning with the Rose STL Lab.