
“In the beginning, everything was void, and J. H. W. H. Conway began to create numbers.”

This is how mathematician John Horton Conway appears in Donald Knuth’s novella *Surreal Numbers*—a seemingly all-knowing force that brought all of the numbers into existence, whose methods were written down so others could deduce and explore them, too.

It’s a fitting description for Conway, who died April 11 at age 82 from complications of COVID-19. The legendary professor made many contributions to different areas of mathematics, with influential ideas that spilled over into quantum physics, philosophy and computer science. He inspired generations of students who encountered him anywhere from summer camps to undergraduate and graduate programs, and his inventive games and puzzles delighted the mass readership of Martin Gardner’s *Scientific American* columns.

“With the passing of John Conway the world of mathematics has lost one of its brightest stars,” Peter Sarnak, professor of mathematics at Princeton University and colleague of Conway’s, said in an e-mail. “His talent and genius were unequaled.”

I didn’t know Conway personally, but as a Princeton student I was aware of him as an eccentric campus celebrity. We’re not talking about the stereotype of an academic holed up in an office who doesn’t want to be bothered; in fact, Conway didn’t even sit in an office at Princeton, but could be found around the math department common room. Reading about Conway’s work and watching recorded lectures and interviews over the past few days, I appreciated that he shared his ideas with such enthusiasm, and in a way that could be broadly understood. I thought, “This is why I loved math in high school. This is what I thought it could be like.” I love that Conway ended up with a wealth of famous insights, games and puzzles because he just followed his own curiosities, and also enjoyed sharing the wonders of math with others.

“What separated him from other mathematicians whose work was also deep and broad and which extended over many decades was his free-spirited fun-loving and playful approach to everything,” Colm Mulcahy, professor of mathematics at Spelman College, in Atlanta, said in an e-mail. “He did very serious mathematics, but with a flair and passion that was quite unique.”

**GAME OF LIFE**

Conway is most famous in the public eye for his “Game of Life.” He was inspired by John von Neumann and Stanislaw Ulam’s concept of “cellular automata.”

The Game of Life has a grid of cells in which there are two states: alive and dead. The “alive” squares are one color, let’s pick blue, and the dead ones are another, let’s pick gray. But unlike games you’d play with friends, this is a “no-player” game. It’s a drama that you simply watch unfold, and the longer you watch, the more you may notice complex structures forming that you never expected.

There are only two rules that each square plays by:

• If you’re dead, and you have exactly three live neighbors, you get reborn (gray becomes blue). Otherwise you stay dead (gray).

• If you’re alive, and you have two or three live neighbors, you stay alive (blue stays blue). Otherwise you die (blue becomes gray).

Conway tinkered with the rules for “about 18 months of coffee times,” he said in a 2014 Numberphile video, and he didn’t use any computers; it was the 1960s, after all. The result was that the configurations could not be predicted; they just kept going and changing. Even today, professionals and amateurs alike play with aspects of this game, marveling at different shapes and behaviors that emerge just by letting the squares go at it with these two rules.
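The rules above (a dead cell with exactly three live neighbors is born; a live cell with two or three live neighbors survives; every other cell dies or stays dead) are easy to try yourself. Here is a minimal Python sketch, my own rendering rather than anything Conway wrote, that represents the board as a set of live-cell coordinates:

```python
from collections import Counter

def step(live):
    """One generation of the Game of Life: a dead cell with exactly 3
    live neighbors is born; a live cell with 2 or 3 survives."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The "glider," a famous five-cell pattern that crawls across the grid:
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
# After four generations the glider reappears, shifted one cell diagonally.
assert cells == {(x + 1, y + 1) for (x, y) in glider}
```

Letting patterns like the glider run for thousands of generations is exactly the kind of open-ended watching that hooked Gardner’s readers.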

Gardner, who had a popular *Scientific American* column called “Mathematical Games” for more than 20 years, popularized the Game of Life in a 1970 column in this magazine. At that time Conway was a professor at the University of Cambridge, England. Gardner subsequently became friends with Conway and published on Conway’s other games and flights of intellectual fancy, such as the Doomsday Algorithm, a method of finding what day of the week corresponds to any date in history.
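The Doomsday Algorithm rests on one observation: in any given year, a handful of easy-to-remember dates (4/4, 6/6, 8/8, 10/10, 12/12, plus 5/9, 9/5, 7/11, 11/7, 3/14, the last day of February, and January 3, or January 4 in leap years) all fall on the same weekday, the year’s “doomsday.” Find that weekday, then count off from the nearest easy date. A rough Python sketch of the method (my own rendering; Conway famously performed these steps in his head in seconds):

```python
# Nearest "doomsday" anchor date in each month (common year).
DOOMSDATES = {1: 3, 2: 28, 3: 14, 4: 4, 5: 9, 6: 6,
              7: 11, 8: 8, 9: 5, 10: 10, 11: 7, 12: 12}

def day_of_week(year, month, day):
    """Weekday of a Gregorian date: 0 = Sunday, ..., 6 = Saturday."""
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    # The year's doomsday weekday; this closed form is equivalent to
    # Conway's mental shortcut for Gregorian years.
    doomsday = (2 + year + year // 4 - year // 100 + year // 400) % 7
    anchor = DOOMSDATES[month] + (1 if leap and month <= 2 else 0)
    return (doomsday + day - anchor) % 7

day_of_week(1969, 7, 20)  # → 0, a Sunday: the Apollo 11 moon landing
```

The table of anchor dates is the only memorization required, which is what made the trick such a crowd-pleaser.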

Despite its widespread popularity for decades, Conway’s relationship to his own Game of Life was complicated; in his view, it wasn’t real mathematics. “It’s one incident in my mathematical life, and I shouldn’t be so annoyed about it,” he told Numberphile, “And I’m trying not to be.”

**SURREAL NUMBERS**

While he didn’t want to be known solely for the Game of Life, Conway has said that one of his proudest accomplishments was the discovery of “surreal numbers.” This idea inspired Knuth’s fictional portrayal of two students learning about Conway’s system through a stone tablet, on which the rules of numerical generation are described.

We as humans are most familiar with the whole numbers, or integers: 0, 1, 2, 3, 4, etc., on the positive side, and –1, –2, –3, –4, etc. on the negative side. But there are numbers in between each of them. There’s not just 2.5 but also 2.55, 2.555, 2.5555, and so on. There are irrational numbers, like pi, that cannot be expressed as fractions. All of these numbers together, rational and irrational alike, are called the real numbers.

This is where Conway stepped in: there are also infinitely small numbers and infinitely large numbers. You can keep adding one to the whole numbers forever, and beyond them all lie infinitely large numbers; each of those, in turn, has a reciprocal that is infinitely small. Each of the numbers too large to name also has a square root, a cube root and so on, ad infinitum. The set of surreal numbers, a name coined by Knuth, is the entire corpus of real numbers plus the infinitely small and infinitely large.

Like many of his feats, Conway arrived at the surreal numbers after deeply pondering the rules of a game—in this case, the game of Go. The surreal numbers were “the greatest surprise of my mathematical life,” Conway said in a 2016 lecture at the University of Toronto.

“Conway waves two simple rules in the air, then reaches into almost nothing and pulls out an infinitely rich tapestry of numbers that form a real closed field,” Gardner wrote in his book *The Colossal Book of Mathematics*. “Every real number is surrounded by a host of new numbers that lie closer to it than any other ‘real’ value does. The system is truly ‘surreal.’”
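Conway’s two rules can be made concrete. A surreal number is a pair of sets {L | R} of previously created numbers, with every member of L less than every member of R; and x ≤ y exactly when no member of x’s left set is ≥ y and no member of y’s right set is ≤ x. Here is a toy Python sketch of the construction (my own illustration, compressing what fills a book in Conway’s and Knuth’s hands):

```python
# A surreal number is modeled as a pair (left_set, right_set) of
# earlier-born surreal numbers, here as tuples so they stay hashable.

def le(x, y):
    """Conway's recursive definition of x <= y: no member of x's left
    set is >= y, and no member of y's right set is <= x."""
    xl, _ = x
    _, yr = y
    return (not any(le(y, l) for l in xl)) and (not any(le(r, x) for r in yr))

zero = ((), ())            # day 0: { | }
one = ((zero,), ())        # day 1: { 0 | }
minus_one = ((), (zero,))  # day 1: { | 0 }
half = ((zero,), (one,))   # day 2: { 0 | 1 }

assert le(zero, one) and not le(one, zero)  # 0 < 1
assert le(minus_one, zero)                  # -1 <= 0
assert le(zero, half) and le(half, one)     # 0 <= 1/2 <= 1
```

Iterating this birth process day by day generates all the dyadic fractions, then (at infinite days) every real number, and beyond them the infinite and infinitesimal numbers Gardner marvels at.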

**FREE WILL THEOREM**

At the sofa by the blackboard where Conway liked to sit on the third floor of Princeton’s Fine Hall, he and fellow professor Simon Kochen would talk about math, astronomy, etymology and whatever Conway had been discussing recently with Martin Gardner. Their friendly chatting turned into a formal collaboration when Kochen mentioned a theorem involving quantum mechanics that he had worked on with Swiss mathematician Ernst Specker.

Conway was interested in the foundations of this so-called Kochen-Specker theorem, which relates to one of Einstein’s objections to quantum mechanics. As Conway and Kochen dug deeper into the foundation of these ideas, they formulated what they called the free will theorem.

Does everything that happens in the universe, down to the behavior of individual particles, depend on everything that has happened previously? Kochen and Specker’s original theorem says “no.” The free will theorem took this further.

Assume an experimenter can choose to measure a particle’s spin in one of 33 particular directions, and can make that choice independently of what’s happened previously. An intrinsic property, called the spin value, would be measured as “zero” or “non-zero,” and it would follow no pattern that could have been predicted in advance. Says Kochen: “the particle’s response is equally free and spontaneous, in the sense that its response is not determined by the past history of the universe.” In other words, “Assume some free will in order to get a lot more free will.”

More generally, Kochen, Specker and Conway’s ideas emphasize that if you had the information about the states of every particle in the universe up to this point, you would not be able to predict what their states will be a second from now.

Some have criticized this theorem as only applying to certain models involving determinism, but mathematicians and physicists alike have continued analyzing it and exploring its implications in recent years.

**A CAMPUS LEGEND**

Beyond all of his accomplishments, Conway was his own institution at Princeton, where he became a professor in 1986.

In Fine Hall, the unadorned brown granite tower across the road from Princeton’s iconic neo-Gothic architecture, Conway gave up his office in the math department “so he could just stay in the common room and hold court,” sometime around 1991, Sarnak said (one former student, Stark Ledbetter, said she did once see Conway’s office—“completely cluttered”). When Christopher Smyth Simons, now an associate professor at Rowan University, was a graduate student at Princeton, he would play backgammon and other games with Conway every day in that third-floor lounge, where the department enjoys a daily afternoon tea break.

Despite Conway’s fame, he still taught courses to undergraduates who hadn’t decided on a major, and they were not traditional classroom experiences. Often wearing a T-shirt, he didn’t use a textbook, and began speaking as soon as he walked in, said Ledbetter, who took the professor’s single-variable analysis course freshman year. He always summarized what had been done in the course so far, which was helpful for the students, she said. By multiple accounts, he might throw a shoe at a window if he thought students were asleep. Tamara Broderick, now an associate professor at MIT, remembers Conway from when she was a student at Princeton and also at the Canada/USA Mathcamp where he taught middle and high school students. At the camp he “wore” a toy bike, one of the props he was known to carry, and told the kids, “If it doesn’t have a bicycle around its neck, it’s not John Conway,” Broderick said.

While some academics may grumble about teaching responsibilities, Conway truly loved it. “‘I’m principally a teacher,’ he said to me,” Kochen said.

And he was a notable guest at the department’s annual Pi Day event—sadly the only time I remember encountering him in person, back in 2005. As other students and I took turns rattling off the digits of pi we had memorized (I know, we’re nerds), Conway seemed to whisper the digits along with us. When called to join the contest, he announced that we contestants had recited the numbers too slowly, and proceeded to impress us with a rapid rendition of perhaps a couple dozen digits. More astonishing, though, is that he told us he had once known 1,111 digits of pi (putting my second-place win, with 158 digits, to shame). In a *Scientific American* column in 2016, he suggested an elaborate method to become a super pi memorizer involving the periodic table.

Such feats of memorization were in character. Kochen, the free will theorem collaborator, remembers that Conway was once reading a newspaper article that mentioned a country whose capital he didn’t know. Kochen didn’t know it either. So, the two decided to learn all of the countries of the world and their capitals. “It was a magical time,” Kochen said.

Ledbetter, who had Conway as a senior thesis advisor from 2008 to 2009, still puts into practice three bits of advice Conway gave her: design your notation so that it’s easy to use; make even small edits to improve your writing if you’re given the chance; and give interesting names to new concepts so people will remember them. After graduation, she took nine years off from mathematics to pursue music, but now studies in a Ph.D. math program at the University of Washington.

“I always thought I would meet him again and tell him I actually am back to doing math,” she said.

Kochen and Conway chatted by phone a little over a week before Conway’s death—not about math, just about life. The two spoke about COVID-19 and Conway referred to it as “that damned virus.” Ultimately it caused his death, according to Kochen and others.

I tried to go deep on the free will theorem with Kochen. After about an hour, my head was buzzing with talk of probability waves, thermodynamics, and hidden variables. For a moment I didn’t understand how I could safely walk across my floor if atoms can lack positional values. Somehow, we got into quantum computers.

“John was very good at explaining things to a layman,” Kochen said, “so we need him now.”
