Humanity in Math and Computer Science
One of the purposes of the Heidelberg Laureate Forum is to allow the young researchers who are attending to interact with the laureates as people, not as distant mythical figures. Monday morning’s talks (where morning is rather loosely interpreted) by Sir Michael Atiyah, Manuel Blum, Wendelin Werner, and William Kahan got me thinking about humanity in the practice of mathematics and computer science.
Sir Michael Atiyah opened the day with a talk on beauty in mathematics. At times, he was provocative. He mentioned Hermann Weyl’s quotation, “My work always tried to unite the truth with the beautiful, but when I had to choose one or the other, I usually chose the beautiful” and explained why he thinks Weyl was not being ironic at all. One of his main themes was that the search for aesthetic beauty in mathematics can be a better guide than apparent truth.
But his comments about human understanding and the search for truth resonated with me more. He said, “the aim of science is to organize knowledge so the human mind can comprehend.” It reminded me of what the late William Thurston, another Fields medalist, wrote in his famous article On Proof and Progress in Mathematics. He says that instead of, “How do mathematicians prove theorems,” or “How do mathematicians make progress in mathematics,” the question of what mathematicians accomplish should be, “How do mathematicians advance human understanding of mathematics?” Human understanding, not a new theorem, was his goal.
I have thought about mathematics as a human art form for a long time, but I must admit I rarely think about the human aspects of computer science. Manuel Blum’s talk on good passwords, however, was explicitly about how humans can create passwords that they can generate quickly but that a machine cannot break, even if it has some knowledge of their other passwords.
When Blum says that a human has to be able to generate these passwords, he means it. He is not interested in theoretical constructions that no human could implement. Before you tell him about your password creation strategy, you must be able to prove that at least one normal human can learn to implement it without too much hassle. As proof that his method works, he uses it himself. My co-blogger John Cook wrote about the details of this humanly computable, machine unbreakable (for a suitable definition of “unbreakable”) algorithm.
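Blum’s actual scheme is the subject of John Cook’s post; purely as an illustration of the flavor of such schemes, here is a toy Python sketch. It assumes two memorized secrets, a letter-to-digit map and a permutation of the digits, and chains them across the site name. The names `f`, `g`, and `password`, and the seeded random generator standing in for human memorization, are my inventions, not Blum’s algorithm.

```python
import random
import string

# Toy sketch only -- NOT Blum's actual algorithm. A human would
# memorize two secrets: a random letter-to-digit map f and a random
# permutation g of the digits 0-9. The seeded RNG here stands in for
# that one-time memorization step.
rng = random.Random(42)
f = {c: rng.randrange(10) for c in string.ascii_lowercase}
g = list(range(10))
rng.shuffle(g)

def password(site: str) -> str:
    """Derive a digit password from a site name using only f and g."""
    letters = [c for c in site.lower() if c in f]
    # Seed the chain with the first and last letters, then fold each
    # remaining letter into the running digit -- all mental arithmetic.
    d = g[(f[letters[0]] + f[letters[-1]]) % 10]
    digits = [d]
    for c in letters[1:]:
        d = g[(f[c] + d) % 10]
        digits.append(d)
    return "".join(str(d) for d in digits)
```

Because each output digit depends on the one before it, changing any letter of the site name changes the rest of the password, which is roughly the property that makes chained schemes like this hard to invert from a handful of leaked examples.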
After coffee, we gathered again for Wendelin Werner’s talk about randomness interpreted in continuous, rather than discrete, processes. The technical material in his talk was the least focused on humanity of all the talks, although he did mention voting systems and how to cheat them, two human pastimes. But he also told us a few anecdotes that made us see him as a person just like us. He began the talk by telling us of his first trip to Heidelberg, when he and his backpack arrived on an overnight train at 5 in the morning and discovered the glories of the German hotel breakfast because the only place open that early was a fancy hotel. At the end of the talk, he took a few minutes to remember his friend Oded Schramm, who died in a hiking accident, and encourage the audience not to take unnecessary risks.
As the saying goes, “to err is human; to really foul things up requires a computer.” William Kahan’s talk right before lunch was on the sobering problem of floating point errors in science and engineering. We trust our calculators implicitly, but sometimes they are incorrect. He described the most dangerous errors as those that are “wrong enough to mislead but not wrong enough to raise suspicion” and gave several examples of real-world situations in which lives or dollars were lost because of floating point errors in calculations.
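A tiny Python session, not from Kahan’s talk but illustrating the kind of error he means: rounding that is invisible at a glance. Binary floating point cannot represent 0.1 exactly, so errors accumulate quietly.

```python
import math

# 0.1 has no exact binary floating point representation, so small
# rounding errors appear in results that "look" right at a glance.
print(0.1 + 0.2)             # 0.30000000000000004, not 0.3
print(sum([0.1] * 10))       # 0.9999999999999999, not 1.0

# math.fsum tracks the rounding error as it goes and returns the
# correctly rounded sum of the inputs.
print(math.fsum([0.1] * 10))  # 1.0
```

An error of one part in 10^16 is exactly “wrong enough to mislead but not wrong enough to raise suspicion”; in a long-running simulation or a financial system, such errors can compound into something consequential.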
What should a program do when it detects an error? Some programs stop running and just display a generic error message, but this can have deadly consequences. The most heartbreaking example Kahan used in his talk was the crash of Air France flight 447 in June 2009. The pitot tubes clogged, leading to invalid speed readings. The autopilot switched over to manual control without indicating which sensors had failed, so the copilots didn’t know which readings could be trusted. Kahan believes that if the autopilot programs had handled invalid data differently, the fatal crash might have been avoided. Kahan said that no one should write a program without thinking through the consequences of the way the program deals with errors like this. Coincidentally, Vanity Fair just published an extensive article about what went wrong on flight 447. The article mentions the issues with the autopilot system that may have contributed to the crash but seems to lay more of the blame on the humans in the cockpit than Kahan did in his talk.
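Kahan’s point suggests a design rule: when data goes bad, a program should say precisely what it no longer knows rather than fail with a generic message. Here is a minimal sketch of that idea; the function name, sensor names, and tolerance policy are all hypothetical, not drawn from any real avionics system.

```python
def consensus_speed(readings, tolerance=10.0):
    """Return an agreed airspeed, or raise an error naming the suspect sensors.

    readings: dict mapping sensor name -> measured speed.
    Toy policy: if the spread of readings exceeds `tolerance`, refuse
    to guess, and report exactly which sensors disagree with the
    median -- instead of silently handing off with no explanation.
    """
    values = sorted(readings.values())
    if values[-1] - values[0] > tolerance:
        median = values[len(values) // 2]
        suspects = [name for name, v in readings.items()
                    if abs(v - median) > tolerance / 2]
        raise ValueError(
            "airspeed sensors disagree: " + ", ".join(sorted(suspects)))
    return sum(values) / len(values)
```

The error path does more work than the success path here, which is the point: a message like “airspeed sensors disagree: pitot_1” tells the humans taking over exactly which readings not to trust.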
The morning’s talks reminded me that you don’t have to look far to find the human elements of mathematics and computer science.