Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2
Published on arXiv, 2025
This work provides a comprehensive study of benign overfitting for linear maximum margin classifiers, discovers a previously unknown phase transition for the noisy model, and offers geometric intuition behind it. We further considerably relax the required covariate assumptions in both the noisy and the noiseless case. Our results demonstrate that benign overfitting of maximum margin classifiers holds in a much wider range of scenarios than was previously known.
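For intuition, here is a minimal sketch of the standard setting (the notation is generic and not taken from the paper): given training data \((x_i, y_i)_{i=1}^{n}\) with \(x_i \in \mathbb{R}^d\) and labels \(y_i \in \{\pm 1\}\), the maximum margin (hard-margin SVM) classifier is
\[
\hat{w} \;=\; \arg\min_{w \in \mathbb{R}^d} \|w\|_2 \quad \text{subject to} \quad y_i \langle w, x_i \rangle \ge 1 \ \text{for all } i = 1, \dots, n,
\]
and benign overfitting refers to the regime where \(\hat{w}\) fits every training label, including the noisy ones, yet still attains near-optimal test error.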
Published on arXiv, 2025
In this paper, we prove directional convergence of the network parameters of fixed-width two-layer leaky ReLU neural networks optimized by gradient descent with exponential loss, which was previously known only for gradient flow. Through a careful analysis of the limiting direction, we establish sufficient conditions for benign overfitting and discover a new phase transition in the test error bound. All of these results hold beyond the nearly orthogonal data setting studied in prior work. As an application, we demonstrate that benign overfitting occurs with high probability in sub-Gaussian mixture models.
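For context, a rough sketch of the setting (generic notation, not quoted from the paper): a two-layer network of width \(m\) with leaky ReLU activation \(\phi(z) = \max(z, \gamma z)\) for some \(\gamma \in (0,1)\) computes
\[
f(x; W) \;=\; \sum_{j=1}^{m} a_j \, \phi\bigl(\langle w_j, x \rangle\bigr),
\]
and gradient descent is run on the empirical exponential loss \(\widehat{L}(W) = \tfrac{1}{n} \sum_{i=1}^{n} \exp\bigl(-y_i f(x_i; W)\bigr)\). Directional convergence means that the normalized parameters \(W_t / \|W_t\|\) converge to a fixed direction as \(t \to \infty\).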
Published:
Poster session at the Princeton Machine Learning Theory Summer School.
TA, University of Toronto, 2022
Teaching Assistant
TA, University of Toronto, 2023
Teaching Assistant
Instructor, University of Toronto, 2023
Instructor for the Probability portion of the Doss Summer Bootcamp
TA, University of Toronto, 2023
Teaching Assistant
TA, University of Toronto, 2024
Teaching Assistant
Instructor, University of Toronto, 2024
Instructor for the Operational Math portion of the Doss Summer Bootcamp
Instructor, University of Toronto, 2024
Instructor for the Probability portion of the Doss Summer Bootcamp