How does mathematical creativity work? You wrestle with a problem for days, trying different pathways and getting nowhere. Then, all of a sudden, when you have stopped thinking about it – walking home, preparing a meal, doing the dishes – the missing idea arrives in a flash. You can’t say where it came from, but once the idea is there it feels inevitable. Mathematicians learn to live with the awkward feeling of not knowing how they do the trick.
When I finished secondary school, I wanted to become a neurobiologist, believing that I would eventually understand self-consciousness and the thought process – until my thesis supervisor told me that questions like these lie outside the realm of science and that I had better talk to philosophers. Neurobiologists study the brain as an organ composed of cells whose workings they try to understand. I remember being rather upset, and eventually deciding to turn my hobby – working through mathematics syllabi in the evenings – into my profession.
You can imagine my sense of excitement when I first saw GPT-3. I vividly remember my first reaction: its creators had clearly lifted the veil of mystery from how we – our brains – generate language and communicate with the outside world. Could it be that we – our brains – are nothing but next-word predictors responding to sensory input, after having gone through an extensive learning period called childhood and education? Indeed, the architecture of convolutional networks is inspired by the layered structures found throughout our neocortex. A thought-provoking account of this perspective is presented in Max Bennett’s recent popular science book about the history of intelligence.
The arrival of large language models (LLMs) sheds an intriguing new light on an old debate. Do the objects studied by mathematicians – numbers, functions, spaces – enjoy an existence independent of human minds, or are they products of cerebral invention? This is the central question of the Connes–Changeux debate. For mathematician and Fields Medallist Alain Connes, mathematics is about discovery rather than invention: the fundamental objects of mathematics, such as natural numbers, are facets of a landscape beyond space and time that we try to chart with our proofs. If intelligent life evolved on another planet, it would arrive at the same theorems, possibly with different notations. For the eminent neurobiologist Jean-Pierre Changeux, the nervous system is shaped by evolution and learning. He doesn’t see why we should posit Connes’ ‘Platonic heaven’. In Changeux’s view, natural numbers and mathematical notions in general are stabilised patterns in brains trained by education and the outside world.
The divergent positions in this debate frame current-day arguments about machine creativity remarkably well. At a recent workshop in Leiden, mathematicians involved in formal proof checking met with philosophers, cognitive scientists, and scientists from leading AI companies. Many mathematicians were in Connes’ camp, believing that AI will never become creative. The others mostly sided with Changeux’s naturalism: if a brain can do it, AI can do it, too; it is now just a matter of time before we figure out how.
Recent findings in cognitive neuroscience suggest a ‘creativity sweet spot’ at sleep onset (the so-called N1 stage) – a state of mind not unlike those moments when mathematical ideas tend to appear. Interestingly, LLM temperature schedules echo this ‘let go, then tighten’ cycle: first you let the model wander through conceptual space and form unexpected combinations, then you reinstate formal constraints and coherence.
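To make the analogy concrete, here is a minimal sketch in Python of how sampling temperature reshapes a next-token distribution. It is a generic softmax sampler, not any particular model’s API, and the logits are made-up scores for four hypothetical candidate tokens: high temperature flattens the distribution (the ‘let go’ phase), low temperature sharpens it around the most likely token (the ‘tighten’ phase).

```python
# Minimal sketch of temperature-controlled sampling (illustrative, not a real model's API).
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    """Sample one token index from logits softened or sharpened by temperature."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                      # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.5, 0.3, -1.0])        # hypothetical scores for 4 candidate tokens

# 'Let go': high temperature, unlikely tokens are picked more often (exploration).
exploratory = [sample_with_temperature(logits, 1.5, rng) for _ in range(10)]

# 'Tighten': low temperature, the top candidate dominates (coherence).
focused = [sample_with_temperature(logits, 0.2, rng) for _ in range(10)]

print("high temperature:", exploratory)
print("low temperature: ", focused)
```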
Who will win the Connes–Changeux debate? Time will tell. For now, I see in these findings a creative contribution to the workload discussion: they confirm what I already knew. Whether or not we are LLMs, nothing enhances creative work more than a regular dose of rest. Let go, then tighten.