Two days ago the Heidelberg Laureate Forum came to an end, with a grand farewell dinner at Heidelberg Castle. Back in July, when I was invited to join the blog team for the Heidelberg Laureate Forum, I didn’t accept immediately. I was busy, and a week in Germany would definitely affect my writing schedule. For several days I sat on the fence, but what finally decided me was… the fact that we would have the closing dinner in a castle! “How many times in my life will I have the chance to eat dinner in a castle?” I asked. And so I said yes.

In retrospect, I can tell you that it was worth it. Both the dinner in the castle, and the whole Heidelberg Laureate Forum, more than lived up to my expectations.

The beginning and ending of the forum’s scientific program were absolutely perfect. The first talk, on Monday morning, was a look backward: Raj Reddy talked about who invented computing. The history was much more complicated than I ever dreamed, but what really stuck with me was that “computing” is not just one thing. There were many insights, some far from obvious, that had to be combined to make up the experience that we today call by one name, “computing.”

The final talk, on Friday afternoon, was by John Hopcroft, and appropriately it was about the future of computer science. He said that students sometimes tell him that he was lucky to get his degree in 1964 (in electrical engineering rather than computer science, because computer science didn’t exist yet). He and many of the laureates had the opportunity to lay the foundations of the discipline from the ground up, inventing the first compilers and operating systems and so on. Surely there will never be such an opportunity again.

But Hopcroft argued that now is equally a time of great opportunity, and the students of today are lucky to be graduating in 2013. He mentioned new scientific fields like computational biology, sparse vectors, and the study of communities (what is a community, anyway?). All of these are just beginning to be explored.

For my mathematically minded readers, who may be feeling left out because the conference began and ended with computer science, Hopcroft happened to mention an intriguing subject that caused more conversation afterwards than anything else in his talk: the ways in which high-dimensional spaces are different from low-dimensional ones.

The main puzzle is this: What is the hypervolume of an N-dimensional hyperball of radius 1, as N gets large? It seems as if it should get larger and larger. A beach ball is somehow rounder than a CD, and a 4-dimensional ball should be even rounder yet. Indeed, the volumes do go up at first. The area of a unit disk is pi (3.14159…). The volume of a unit sphere is 4/3 times pi. The hypervolume of a four-dimensional hyperball, if I remember correctly, is 1/2 times pi squared. (Someone correct me if I’m wrong.)
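For readers who want to check these values numerically, here is a minimal sketch in Python. It assumes the standard closed-form volume formula pi^(N/2) / Gamma(N/2 + 1), which is a known fact I am importing, not something derived in this post:

```python
import math

def ball_volume(n):
    """Volume of the unit n-ball, from the standard formula pi^(n/2) / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

print(ball_volume(2))  # pi = 3.14159...  (area of the unit disk)
print(ball_volume(3))  # 4/3 * pi = 4.18879...
print(ball_volume(4))  # pi^2 / 2 = 4.93480...  (so the remembered value is right)
```

So the half-remembered value pi squared over 2 does check out.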

But remarkably, somewhere around the sixth dimension the hyper-hyper-volumes start going down, and in the limit they approach zero! In other words, the unit N-ball is tiny compared to the unit N-cube. How can this be?
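The turnaround is easy to see numerically. A quick sketch, again leaning on the closed-form volume formula rather than anything intuitive:

```python
import math

def ball_volume(n):
    # Unit n-ball volume: pi^(n/2) / Gamma(n/2 + 1)
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

peak = max(range(1, 21), key=ball_volume)
print(peak)                       # 5 -- the unit ball is "biggest" in five dimensions
print(round(ball_volume(5), 4))   # about 5.2638
print(round(ball_volume(20), 4))  # about 0.0258 -- already tiny, and still shrinking
```

The maximum is at dimension 5, the volumes decrease from dimension 6 on, and by dimension 20 the unit ball occupies a vanishing sliver of the unit cube's surroundings.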

There are several answers. First, a physicist’s answer would be that you can’t compare them. The units are different, so it’s like comparing apples to oranges. However, that’s not very satisfying. The numbers really do mean something. As Hopcroft said, if you pick two random vectors on an N-dimensional sphere they are very likely to be perpendicular, or nearly so. That’s not true on a low-dimensional sphere. There is a genuine phenomenon here that needs explaining.
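Hopcroft's near-orthogonality remark can be tested by simulation. A sketch, using the standard trick of normalizing a Gaussian vector to get a uniform random direction (the sampling recipe is my assumption, not part of the talk):

```python
import math, random

def random_unit_vector(n):
    """Uniform random direction: normalize a vector of independent Gaussians."""
    v = [random.gauss(0, 1) for _ in range(n)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

random.seed(0)
for n in (3, 100, 2500):
    dots = []
    for _ in range(200):
        u, w = random_unit_vector(n), random_unit_vector(n)
        dots.append(abs(sum(a * b for a, b in zip(u, w))))
    print(n, round(sum(dots) / len(dots), 4))
# The average |dot product| shrinks like 1/sqrt(n): in high dimensions two
# random directions are almost always nearly perpendicular.
```

In 3 dimensions the dot product of two random directions is spread all over [-1, 1]; by dimension 2500 it is pinned near zero.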

Second, the mathematician’s answer would be to compute an integral. I’ve seen this done, and at the end it’s not very satisfying. You get this formula for the N-volume of the N-sphere that involves the gamma function, and you aren’t any wiser than you were before. It’s just like Cédric Villani said in the round-table panel on mathematics on Tuesday. If the proof doesn’t add to your understanding, it wasn’t really worth doing.

(Well, I don’t completely agree. It does help you in the sense that you can just accept that this is true, and then it leads you to several other counterintuitive insights about the N-sphere and N-ball. But let’s move on.)

It would be nice to have a truly intuitive explanation of why the N-dimensional ball “shrinks.” I’ll give a sketchy argument that is still not completely intuitive — it’s more algebraic than geometric. But perhaps one of you can do better?

[Non-math types can stop here, but if you’ve had first-year calculus you should be able to follow the rest of this post.]

According to Pappus’ theorem, the volume of a region formed by rotating an area A about an axis l is equal to the area of A times the distance traveled by the center of mass of A. This continues to be true in higher dimensions. So in the case of the N-ball, the volume should be the (N-1)-volume of **half** an (N-1)-ball, times 2 pi, times the altitude of the center of mass of **half** an (N-1)-ball.
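As a sanity check on Pappus' theorem in the familiar 3-dimensional case, here is a quick Monte Carlo sketch (the torus example and the sampling box are my choices, not part of the argument): rotating a disk of radius r about an axis at distance R from its center should give volume (pi r^2) times (2 pi R).

```python
import math, random

# Pappus in 3-D: rotating a disk of radius r about an axis at distance R
# from its center sweeps out a torus of volume (pi * r^2) * (2 * pi * R).
r, R = 1.0, 3.0
pappus_volume = (math.pi * r ** 2) * (2 * math.pi * R)

# Monte Carlo check: sample points in a box containing the torus and
# count the fraction that land inside it.
random.seed(0)
trials, hits = 200_000, 0
for _ in range(trials):
    x = random.uniform(-(R + r), R + r)
    y = random.uniform(-(R + r), R + r)
    z = random.uniform(-r, r)
    if (math.hypot(x, y) - R) ** 2 + z ** 2 <= r ** 2:
        hits += 1
box_volume = (2 * (R + r)) ** 2 * (2 * r)
estimate = box_volume * hits / trials
print(pappus_volume, estimate)  # the two agree to within Monte Carlo error
```

The same bookkeeping, with the half (N-1)-ball in place of the disk, is what drives the argument below.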

Where is this center of mass? Hopcroft gave us a clue yesterday when he said that the mass of an N-ball is concentrated around the equator. What this means is that the center of mass of a **half** N-ball is very low.

How can we see this? Well, the altitude of the center of mass is given by averaging the N-th coordinate (x_N) over the half N-ball. I don’t want to do this integral, because that would not be intuitive. Instead I want to estimate it. By Schwarz’s inequality, the average of x_N is bounded above by the root-mean-square (RMS) average of x_N, obtained by averaging x_N^2 and taking the square root. (Even though the ancient Greeks didn’t have calculus, the arithmetic mean-RMS inequality was familiar to them.)

Now here’s the key thing. The average of x_N^2 over a half-ball is the same as the average of x_N^2 over a whole ball. And that is, by symmetry, the same as the average of x_1^2, x_2^2, etc. Thus the average of x_N^2 is just 1/N times the average of (x_1^2 + x_2^2 + … + x_N^2). This, in turn, is the average of r^2. And the average value of r^2 approaches 1 as N approaches infinity; in fact it equals N/(N+2). Certainly it’s bounded above by 1, because r itself is bounded above by 1.
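The symmetry step can be checked by simulation. A sketch, sampling uniform points in the ball by the usual direction-times-radius recipe (the recipe is my assumption, not something from the post):

```python
import math, random

def random_ball_point(n):
    """Uniform point in the unit n-ball: random direction times radius U**(1/n)."""
    v = [random.gauss(0, 1) for _ in range(n)]
    norm = math.sqrt(sum(x * x for x in v))
    radius = random.random() ** (1.0 / n)
    return [radius * x / norm for x in v]

random.seed(0)
n, trials = 50, 50_000
avg_xn2 = avg_r2 = 0.0
for _ in range(trials):
    p = random_ball_point(n)
    avg_xn2 += p[-1] ** 2
    avg_r2 += sum(x * x for x in p)
avg_xn2 /= trials
avg_r2 /= trials
print(round(avg_r2, 4))   # close to n/(n+2) = 0.9615, heading toward 1 as n grows
print(round(avg_xn2, 5))  # close to avg_r2 / n, and safely below 1/n = 0.02
```

The average of x_N^2 really is 1/N-th of the average of r^2, and the latter is just shy of 1.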

Putting it all together, the mean of x_N^2 is less than 1/N times 1, and so the RMS of x_N is less than 1/sqrt(N). Therefore the average of x_N over the half N-ball is less than 1/sqrt(N). Finally, by Pappus’ principle, this means that the N-volume of the N-ball is less than pi/sqrt(N) times the (N-1)-volume of the (N-1)-ball. So once N is greater than pi squared (about 9.87), this factor drops below 1, the volumes start decreasing, and in the limit the N-volume has to go to zero.
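The bound can be compared against the true volumes, computed from the gamma-function formula mentioned earlier; a short sketch:

```python
import math

def ball_volume(n):
    # Unit n-ball volume: pi^(n/2) / Gamma(n/2 + 1)
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

for n in range(2, 21):
    ratio = ball_volume(n) / ball_volume(n - 1)
    bound = math.pi / math.sqrt(n)
    print(n, round(ratio, 4), "<", round(bound, 4))
# The true ratio of successive volumes is always below pi/sqrt(n), and once
# n exceeds pi^2 (about 9.87) the bound itself drops below 1, forcing the
# volumes to shrink toward zero.
```

The Pappus estimate is not tight (the true ratio behaves like sqrt(2*pi/n) for large n), but it is more than enough to force the collapse.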

As I said, I don’t consider this a truly intuitive explanation, but I think it is convincing; it does add a little bit to my understanding, and it doesn’t involve any actual computation or gamma functions.

Thanks for the neat way in which you carried the proof I sketched in our conversation in the ATC lounge to its conclusion! Schwarz’s inequality is a neat touch; I’d not thought of that.

Very nice explanation, using my compatriot’s (Pappus – Πάππος) theorem and some “intuitive calculus”!

Permit me to suggest another “intuitive” approach, which hardly uses any calculations, just some basic arithmetic (that is: combinatorics).

Well, it is a fact that even for large n (dimension) the unit n-sphere inscribed in the n-cube is still the LARGEST possible. Its diameter is always equal to the “side length” of the cube, and the sphere’s surface touches every “face” of the cube (a “face” of an n-cube being, of course, an (n-1)-cube!).

So why the “shrinkage” of the sphere in higher and higher dimensions? It is simply a matter of arithmetic! The sphere always occupies the “central area”/center of the n-cube, but there is not much “center” left as n goes to infinity. Most of the cube’s volume “escapes” (centrifugally, sort of) towards the corners/“vertices”. For example, a 100-cube has 200 “faces” but 2^100 vertices!

Let’s consider the “diameters” of the cube, namely all the straight lines that pass through the center of the cube. There are two main types, as far as length is concerned. The shortest — let’s call them “good” or “short” — start from the center of a face, pass through the center C of the cube, and end at the center of the opposite face. For the usual 3-dimensional cube that circumscribes a unit sphere (r = 1), the length of the short diameter is 2 (since 2 is both the diameter of the sphere and the edge length of the cube). That shows that a “good/short” diameter lies entirely (its whole length) INSIDE the sphere. The “bad/long” diameters, on the other hand, are those which run from a vertex, through center C, to the opposite vertex. For an n-cube with side length 2, the long “bad” diameter has length 2·sqrt(n). Thus, for a 100-cube the long/bad diameter has length 2 × 10 = 20. We notice that in this case just 1/10th of this length lies inside the sphere!
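The counts and lengths in this argument are easy to verify; a quick Python sketch (the side-2 cube with the unit ball inscribed is the setup from the comment above):

```python
import math

# Numerical check: an n-cube of side 2 with the unit ball inscribed in it.
for n in (3, 100):
    faces = 2 * n                        # an n-cube has 2n faces...
    vertices = 2 ** n                    # ...but 2^n vertices
    long_diagonal = 2 * math.sqrt(n)     # vertex-to-opposite-vertex length
    fraction_inside = 2 / long_diagonal  # the ball covers 1/sqrt(n) of it
    print(n, faces, vertices, round(long_diagonal, 2), round(fraction_inside, 3))
# For n = 100: 200 faces, 2**100 vertices, long diagonal 20, and only
# one tenth of that diagonal lies inside the ball.
```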

Moreover, there are 100 “good/short/full” diameters, but 2^99 long ones (“empty/bad”)! The above argument “by induction” may not be so mathematically strict, but it is logically very powerful, I think. What do you think of this informal “proof”? 🙂

George (Greece)