On the second day of the Heidelberg Laureate Forum, Fred Brooks took the stage to talk about virtual environments. We were lucky enough to interview him last year, when we focused mostly on software development. Because he is so famous in that area, we were sheepishly surprised to hear that he had been focusing on virtual reality for the last several decades. This year, we interviewed him again and made sure to dig into that topic some more. Here, I touch on aspects of both our interview and his lecture.
In his lecture, Brooks opened with a quote from Ivan Sutherland, which he had also shared with us last year:
Don’t think of that thing as a screen; think of it as a window. Through that window, one looks into a virtual world.
Sutherland’s challenge for virtual reality included achieving a multitude of difficult benchmarks:
- Being able to provide a completely immersive experience to users
- Improving image generation so that the virtual world looks real and stays stationary as the user’s viewpoint moves
- Allowing users to directly manipulate virtual objects
- Ensuring manipulated virtual objects move and react realistically
- Maintaining the world model in real time
- Providing a virtual world that both feels and sounds real
To realize the vision for virtual reality, achievements are required in displaying and modelling the virtual world, as well as in rendering, tracking, and system power.
And even if we do achieve all this, so what? Is there any real virtue in virtual reality? This is the question that Brooks has spent a lot of energy on over the last 50 years.
In his lecture, Brooks outlined how his lab has explored how to make virtual reality “work” – that is, cause the user to behave as if they are in the real world. Their measures include training transfer; subjective questionnaires; contemporaneous self-reports; behavioural measures that compare performance in the virtual versus the real world; physiological surrogates for presence; and engineering-based, not science-based, psychology. You can read about the group’s fascinating research and peruse their publications online.
Brooks ended his talk with a summary of where we are now and some hot open technical questions. The latter can be summarized as ‘What is the best way to interact with virtual worlds?’ and ‘How can we build displays and tracking for teams?’
In our interview this year, we discussed in more depth some of the possible answers to the first question. The recurring theme was that, in many if not most cases, mixed reality would be key.
For example, Brooks shared in detail his experience with a flight simulator (he also touched on this during his lecture). Some parts of the simulator must be digital; otherwise, it would be too dangerous to allow pilots-in-training to perform downward spirals, and it would take too long to practice landing at all the world’s airports. At the same time, the controls of the cockpit must look and feel real. The pilot must get a tactile feel for where everything is, and receive accurate haptic feedback from using the controls.
Several years ago, I did a study on the cognitive advantages of augmented reality for learning. My perspective for the paper was to go light on the digital layer and heavy on reality. However, I believe that many of my insights apply to mixed reality as well, and help us understand why a fully virtual world might not always be the best solution, particularly for the learning and training applications Brooks is most interested in.
Indeed, it will be interesting to see whether the most successful future virtual environment applications end up being fully virtual or mixed, especially for applications set up outside the home.
Brooks’ lab is more or less shut down now that he is retired, but work in the area lives on. I for one look forward to seeing Sutherland’s and Brooks’ bold visions for virtual environments come to fruition, hopefully within my lifetime.