Every year, the countries competing in the International Mathematical Olympiad (IMO) arrive with a booklet of their best, most original problems. These booklets get shared among delegations, then quietly disappear. Nobody had ever collected them systematically, cleaned them, and made them available, not for AI researchers testing the limits of mathematical reasoning, and not for the students around the world training for these competitions largely on their own.
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), King Abdullah University of Science and Technology (KAUST), and the company HUMAIN have now done exactly that.
MathNet is the largest high-quality dataset of proof-based math problems ever created. Comprising more than 30,000 expert-authored problems and solutions spanning 47 countries, 17 languages, and 143 competitions, it is five times larger than the next-biggest dataset of its kind. The work will be presented at the International Conference on Learning Representations (ICLR) in Brazil later this month.
What makes MathNet different is not only its size, but its breadth. Earlier Olympiad-level datasets draw almost exclusively from competitions in the United States and China. MathNet spans dozens of countries across six continents, covers 17 languages, includes both text- and image-based problems and solutions, and covers four decades of competition mathematics. The goal is to capture the full range of mathematical perspectives and problem-solving traditions that exist across the global math community, not just the most visible ones.
“Every country brings a booklet of its most novel and most creative problems,” says Shaden Alshammari, an MIT PhD student and lead author on the paper. “They share the booklets with one another, but nobody had made the effort to collect them, clean them, and upload them online.”
Building MathNet required tracking down 1,595 PDF volumes totaling more than 25,000 pages, spanning digital documents and decades-old scans in more than a dozen languages. A significant portion of that archive came from an unlikely source: Navid Safaei, a longtime IMO community figure and co-author who had been collecting and scanning these booklets by hand since 2006. His personal archive formed much of the backbone of the dataset.
The sourcing matters as much as the scale. Where most existing math datasets pull problems from community forums like Art of Problem Solving (AoPS), MathNet draws exclusively from official national competition booklets. The solutions in these booklets are expert-written and peer-reviewed, and they often run to several pages, with authors walking through multiple approaches to the same problem. That depth gives AI models a far richer signal for learning mathematical reasoning than the shorter, informal solutions typical of community-sourced datasets. It also means the dataset is genuinely useful for students: Anyone preparing for the IMO or a national competition now has access to a centralized, searchable collection of high-quality problems and worked solutions from traditions around the world.
“I remember so many students for whom it was an individual effort. Nobody in their country was training them for this kind of competition,” says Alshammari, who competed in the IMO as a student herself. “We hope this gives them a centralized place with high-quality problems and solutions to learn from.”
The team has deep roots in the IMO community. Sultan Albarakati, a co-author, currently serves on the IMO board, and the researchers are working to share the dataset with the IMO foundation directly. To validate the dataset, they assembled a grading team of more than 30 human evaluators from countries including Armenia, Russia, Ukraine, Vietnam, and Poland, who coordinated to verify thousands of solutions.
“The MathNet database has the potential to be an excellent resource for both students and leaders looking for new problems to work on or searching for the solution to a difficult question,” says Tanish Patil, deputy leader of Switzerland’s IMO team. “While other archives of Olympiad problems do exist (notably, the Contest Collections forums on AoPS), these resources lack a standardized formatting system, verified solutions, and important problem metadata such as the topics and theory required. It will also be interesting to see how this dataset is used to improve the performance of reasoning models, and whether we will soon be able to reliably answer an important challenge when creating novel Olympiad questions: determining if a problem is truly original.”
MathNet also functions as a rigorous benchmark for AI performance, and the results reveal a more complicated picture than recent headlines about AI math prowess might suggest. Frontier models have made extraordinary progress: Some have reportedly achieved gold-medal performance on the IMO, and on standard benchmarks they now solve problems that would stump most humans. But MathNet shows that progress is uneven. Even GPT-5, the top-performing model tested, averaged around 69.3 percent on MathNet’s main benchmark of 6,400 problems, failing nearly one in three Olympiad-level problems. And when problems include figures, performance drops significantly across the board, exposing visual reasoning as a consistent weak point for even the most capable models.
Several open-source models scored 0 percent on Mongolian-language problems, highlighting another dimension where current AI systems fall short despite their overall strength.
“GPT models are equally good in English and other languages,” Alshammari says. “But many of the open-source models fail completely at less-common languages, such as Mongolian.”
The diversity of MathNet is also designed to address a deeper limitation in how AI models learn mathematics. When training data skews toward English and Chinese problems, models absorb a narrow slice of mathematical culture. A Romanian combinatorics problem or a Brazilian number theory problem may approach the same underlying concept from an entirely different angle. Exposure to that range, the researchers argue, makes both humans and AI systems better mathematical thinkers.
Beyond problem-solving, MathNet introduces a retrieval benchmark that asks whether models can recognize when two problems share the same underlying mathematical structure, a capability that matters both for AI development and for the math community itself. Near-duplicate problems have appeared in actual IMO exams over the years because finding mathematical equivalences across different notations, languages, and formats is genuinely hard, even for expert human committees. Testing eight state-of-the-art embedding models, the researchers found that even the strongest identified the correct match only about 5 percent of the time on the first try, with models often ranking structurally unrelated problems as more similar than equivalent ones.
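The scoring behind that kind of retrieval test is straightforward to picture. Here is a minimal sketch, assuming each problem has already been embedded as a vector and each query problem has one annotated structural match; the function and variable names are illustrative, not taken from the MathNet codebase:

```python
import numpy as np

def top1_retrieval_accuracy(query_embs: np.ndarray,
                            corpus_embs: np.ndarray,
                            gold_idx: np.ndarray) -> float:
    """Fraction of queries whose nearest corpus problem (by cosine
    similarity) is the annotated structurally equivalent problem."""
    # Normalize rows so the dot product equals cosine similarity.
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    c = corpus_embs / np.linalg.norm(corpus_embs, axis=1, keepdims=True)
    sims = q @ c.T                    # (num_queries, corpus_size) similarity matrix
    top1 = sims.argmax(axis=1)        # index of the most similar corpus problem
    return float((top1 == gold_idx).mean())

# Toy usage: 3 query problems, 5 corpus problems, 8-dimensional embeddings.
rng = np.random.default_rng(0)
queries = rng.normal(size=(3, 8))
corpus = rng.normal(size=(5, 8))
gold = np.array([0, 2, 4])            # hypothetical annotated matches
print(f"top-1 accuracy: {top1_retrieval_accuracy(queries, corpus, gold):.2f}")
```

A roughly 5 percent top-1 score on this kind of metric means the annotated match almost never comes back as the single most similar problem, which is what the researchers observed even for the strongest embedding models they tested.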
The dataset also includes a retrieval-augmented generation benchmark, testing whether giving a model a structurally related problem before asking it to solve a new one improves performance. It does, but only when the retrieved problem is genuinely relevant. DeepSeek-V3.2-Speciale gained up to 12 percentage points with well-matched retrieval, while irrelevant retrieval degraded performance in roughly 22 percent of cases.
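The setup for that experiment is simple to sketch: a retrieved problem and its solution are placed in the prompt ahead of the target problem. The wording and field names below are assumptions for illustration, not the prompt used in the paper:

```python
from typing import Optional

def build_rag_prompt(new_problem: str, retrieved: Optional[dict] = None) -> str:
    """Optionally prepend a solved, structurally related problem before the target problem."""
    if retrieved is None:
        # Baseline condition: the model sees only the new problem.
        return f"Solve the following problem.\n\n{new_problem}"
    # Retrieval-augmented condition: include the related problem and its solution.
    return (
        "Here is a solved problem that may use a related idea.\n\n"
        f"Problem: {retrieved['problem']}\n"
        f"Solution: {retrieved['solution']}\n\n"
        "Now solve the following problem.\n\n"
        f"{new_problem}"
    )
```

Comparing the model's accuracy with and without the retrieved context, and separately for well-matched versus irrelevant retrievals, is what exposes both the gains and the failure mode the researchers report.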
Alshammari wrote the paper with Safaei, HUMAIN AI engineer Abrar Zainal, KAUST Academy Director Sultan Albarakati, and MIT CSAIL colleagues: master’s student Kevin Wen SB ’25; Microsoft Principal Engineering Manager Mark Hamilton SM ’22, PhD ’25; and professors William Freeman and Antonio Torralba. Their work was funded, in part, by the Schwarzman College of Computing Fellowship and the National Science Foundation.
MathNet is publicly available at mathnet.csail.mit.edu.


