Consistent Output Coding Algorithms for Multi-Label Learning with Low-Rank Losses.
Harish G. Ramaswamy, Mingyuan Zhang, Shivani Agarwal, Robert C. Williamson.
In preparation for submission to Journal of Machine Learning Research (JMLR).

Consistent Multi-Label Learning from Noisy Labels.
Mingyuan Zhang, Shivani Agarwal.
Preprint, under review, 2024.

On the Minimax Regret in Online Ranking with Top-k Feedback.
Mingyuan Zhang, Ambuj Tewari.
Preprint, under review, 2023.
[pdf][link]

[5] Multiclass Learning from Noisy Labels for Non-decomposable Performance Measures.
Mingyuan Zhang, Shivani Agarwal.
In Proceedings of the 27th International Conference on Artificial Intelligence and Statistics (AISTATS), 2024.
[pdf][link]

[4] Foreseeing the Benefits of Incidental Supervision.
Hangfeng He, Mingyuan Zhang, Qiang Ning, Dan Roth.
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
Oral paper.
[pdf][link]

[3] Learning from Noisy Labels with No Change to the Training Process.
Mingyuan Zhang, Jane Lee, Shivani Agarwal.
In Proceedings of the 38th International Conference on Machine Learning (ICML), 2021.
[pdf][link]

[2] Bayes Consistency vs. H-Consistency: The Interplay between Surrogate Loss Functions and the Scoring Function Class.
Mingyuan Zhang, Shivani Agarwal.
In Advances in Neural Information Processing Systems (NeurIPS), 2020.
Spotlight paper.
[pdf][link]

[1] Convex Calibrated Surrogates for the Multi-Label F-Measure.
Mingyuan Zhang, Harish G. Ramaswamy, Shivani Agarwal.
In Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.
[pdf][link]