About me
I am a final-year Ph.D. candidate at the Swiss Federal Institute of Technology in Lausanne (EPFL), in the EDIC doctoral program. I am co-supervised by Prof. Martin Jaggi and Prof. François Fleuret in the Machine Learning and Optimization (MLO) laboratory. My research focuses on understanding and democratizing Large Language Models (LLMs), and on improving the robustness and uncertainty estimation of deep models under distribution shifts. I am currently interning at Apple ML Research (MLR) on Ronan Collobert's team, supervised by David Grangier. Previously, I had the chance to visit Michael Jordan's laboratory at the University of California, Berkeley, working on minimax optimization and domain adaptation.
Working Papers
-
The AdEMAMix Optimizer: Better, Faster, Older
M. Pagliardini, P. Ablin, D. Grangier
under review • Paper • Tweet • BibTex
-
CoTFormer: A Chain-of-Thought Driven Architecture with Budget-Adaptive Computation Cost at Inference
A. Mohtashami*, M. Pagliardini*, M. Jaggi
under review • Paper • BibTex
Peer-reviewed Published Papers
-
DenseFormer: Enhancing Information Flow in Transformers via Depth Weighted Averaging
M. Pagliardini*, A. Mohtashami*, F. Fleuret, M. Jaggi
NeurIPS 2024 • Paper • Code • Tweet • BibTex
-
DoGE: Domain Reweighting with Generalization Estimation
S. Fan, M. Pagliardini, M. Jaggi
ICML 2024 • Paper • Code • BibTex
-
MEDITRON-70B: Scaling Medical Pretraining for Large Language Models
Z. Chen, A.H. Cano, A. Romanou, A. Bonnet, K. Matoba, F. Salvi, M. Pagliardini, S. Fan, A. Köpf, A. Mohtashami, A. Sallinen, A. Sakhaeirad, V. Swamy, I. Krawczuk, D. Bayazit, A. Marmet, S. Montariol, MA. Hartley, M. Jaggi, A. Bosselut
arXiv 2023 • Paper • Code • BibTex
-
Fast Attention Over Long Sequences With Dynamic Sparse Flash Attention
M. Pagliardini*, D. Paliotta*, M. Jaggi, F. Fleuret
NeurIPS 2023 • Paper • Code • Tweet • BibTex
-
A Primal-dual Approach for Solving Variational Inequalities with General-form Constraints
T. Chavdarova*, T. Yang*, M. Pagliardini, M. I. Jordan
ICLR 2024 • Paper • Code • BibTex
-
Agree to Disagree: Diversity through Disagreement for Better Transferability
M. Pagliardini, M. Jaggi, F. Fleuret, S. P. Karimireddy
ICLR 2023 (Oral, Notable top 5%) • Paper • Code • BibTex
-
Taming GANs with Lookahead-Minmax
T. Chavdarova*, M. Pagliardini*, S. U. Stich, M. Jaggi, F. Fleuret
ICLR 2021 • Paper • Code • BibTex
-
Better Word Embeddings by Disentangling Contextual N-gram Information
P. Gupta*, M. Pagliardini*, M. Jaggi
NAACL 2019 • Paper • Code • BibTex
-
Unsupervised Learning of Sentence Embeddings Using Compositional N-gram Features
M. Pagliardini*, P. Gupta*, M. Jaggi
NAACL 2018 • Paper • Code • BibTex
Workshop Papers
-
CoTFormer: More Tokens With Attention Make Up For Less Depth!
A. Mohtashami*, M. Pagliardini*, M. Jaggi
Workshop on Advancing Neural Network Training (WANT) (NeurIPS 2023) (Oral) • Paper • BibTex
-
Fast Causal Attention with Dynamic Sparsity
M. Pagliardini*, D. Paliotta*, M. Jaggi, F. Fleuret
Workshop on Efficient Systems for Foundation Models (ICML 2023) (Oral) • Paper • Code • BibTex
-
Diversity through Disagreement for Better Transferability
M. Pagliardini, M. Jaggi, F. Fleuret, S. P. Karimireddy
Workshop on Distribution Shifts (DistShift) (NeurIPS 2022) • Paper • Code • BibTex
-
Improving Generalization via Uncertainty Driven Perturbations
M. Pagliardini, G. Manunza, M. Jaggi, M. I. Jordan, T. Chavdarova
arXiv 2022 • Paper • Code • BibTex
-
The Peril of Popular Deep Learning Uncertainty Estimation Methods
Y. Liu*, M. Pagliardini*, T. Chavdarova, S. U. Stich
Workshop on Bayesian Deep Learning (NeurIPS 2021) • Paper • BibTex
-
Improved Adversarial Robustness via Uncertainty Targeted Attacks
G. Manunza*, M. Pagliardini*, M. Jaggi, T. Chavdarova
Workshop on Uncertainty and Robustness in Deep Learning (ICML 2021) • Paper • BibTex
* Indicates equal contribution.
Get In Touch
Feel free to reach out at: matteo.pagliardini@[epfl].ch
Or find me at EPFL in INJ340