I am an incoming PhD candidate in Computer Science at GLADIA, Sapienza University of Rome. My research interests lie in deep learning, with a particular focus on overcoming scalability challenges that limit its real-world applicability and addressing the current shortcomings of LLMs. Previously, I was an exchange student at the Chinese University of Hong Kong (Shenzhen) and a visiting graduate researcher at UC San Diego.
I am always open to collaborations, discussions, and chatting. Feel free to connect!
[08-2025] Our new paper “On Task Vectors and Gradients”, exploring the link between task vectors and gradients, is now available as a preprint here!
[07-2025] I’ve graduated from Sapienza University of Rome with honors! Check out my thesis here.
[07-2025] Our paper “ATM: Improving Model Merging by Alternating Tuning and Merging” has been accepted for presentation at the Breaking the Monolith: 1st ICIAP Workshop on Advances in Modular Deep Learning!
[06-2025] I’ve started a hybrid research internship at Panasonic North America focused on LLM code generation, in collaboration with Stanford University and ItalAI.
[03-2025] I’ve joined the University of California, San Diego as a visiting student researcher, working on a multimodal benchmark for time series captioning and understanding with VLMs, under the guidance of Prof. Rose Yu.
[11-2024] Our paper on multitask learning is available as a preprint here!