Google (GOOG, Financial) says its latest Gemini Deep Think model read the official International Mathematical Olympiad (IMO) problems in plain English and wrote out full proofs within the competition's four-and-a-half-hour time limit. It scored 35 out of 42 points, the first time any AI has reached gold-medal level at the Olympiad.
Not to be outdone, OpenAI quietly posted on X that its experimental reasoning model had matched the feat. Three former IMO medalists graded its proofs and unanimously awarded the same 35-point score, earned under the same no-internet, no-tools conditions.
Google plans to share the model with a few trusted mathematicians before rolling it out to subscribers. OpenAI calls its gold-level system purely experimental and says it won't release anything this capable for several months.
This is a big deal because it shows AI can handle real, structured problem solving: reading a problem in plain language and producing a complete, rigorous proof. We might soon see AI tutoring students in advanced math, helping researchers verify proofs in a flash, or even new kinds of math competitions where humans and machines go head to head.