Information Design

Theory, Theory on the Wall…

Communications of the ACM, 2002. Viewpoint column. In press.

Felipe Castel
Information Design Atelier

Theory, Theory on the Wall, tell me who is the fairest of them all. How do we determine that a theory captures well the spirit [and hence the beauty] of a field? HCI, for instance, is a field of computer science that purports to guide designers in organizing interaction with computers, and over the years many tips and much craft wisdom have accumulated to do just that. And yet, where is the theory in HCI that will pull all this together into a coherent picture?

Looking at the recent book Human-Computer Interaction in the New Millennium [ACM Press, 2002], one sees a lot of talk but little substance. Maybe that just reflects our academic way of doing things nowadays, when the measure of an academic is often the quantity of talk rather than its quality or originality. But is that the gist of it, or merely an aspect of it?

Perhaps the problem of theory goes deeper than the sociology of the scientific community. Could it be that computing itself is just too unwieldy for any theory of substance? That might well be the case, not in areas such as hardware or foundations, of course, but in areas such as HCI or software methods, areas more on the human side of the computing equation. Let’s see what we can make of it.

Anything dealing with the human connection must rely on the underlying human sciences, in particular psychology. And yet, theory building in psychology is fraught with difficulty. I speak here of scientific theory building, which has an integrative goal, not of wild speculation of an interpretive nature, such as psychoanalysis. Speculation in theory is important and needs to be encouraged, but factual integration remains an essential aspect of theory, not to be forgotten.

Take learning theory, for instance. John Anderson’s ACT theory of learning is perhaps the best-developed theory in the field, and it accounts extremely well for the learning interactions that take place within highly programmed learning environments, such as those of computer-based tutors. But it will not handle the looser learning environments typical of traditional learning settings.

Attempts to generalize a theory beyond its original contexts of study, a natural inclination in theory building [to be resisted, of course], can lead to ridicule, as happened to old man Skinner, who insisted his behaviorism explained all learning. The later demise of programmed learning as an instructional technique showed that the boundaries had been overstepped.

Theory does seem to succeed well in the physical realm, where physical laws seem robust and unification proceeds with gusto. Seeking unifying models of phenomena is, after all, the prime role of theory. Why so successful there? Is it a question of much greater funding? Or of simpler phenomena? Although medical science does show us grappling with tremendous complexity. And the applied side of physics, engineering, is short on theory and relies on the underlying physics for its models.

Computing itself is an aspect of engineering, though one dealing with the virtual realm. Computing deals with representations, be they complex simulations or simple numbers. While it has important consequences in the physical and mental realms we all operate in, computing’s virtual realm [see the February 2000 issue of CACM] is artifactual, that is, designed. And as such, constrained. In its constrained areas, computing theory is successful.

And yet, because of its nature as an abstract tool representing and manipulating things out there in the world, computing remains as creative, open, and artistic as the phenomena it represents. And so it must tie into the fabric of human psychology if we humans are to find it at all useful. It must deal with HCI. And it must confront the challenges of HCI.

Human complexity is of course legendary, hence the difficulty. Wouldn’t computing be great, many a software engineer has asked, if it weren’t for those users? All in good jest, of course, but something more fundamental may well lie hidden underneath the jesting.

On the one hand, we accept the self-importance of humans, as they insist on being in charge and on dealing with all the little decisions that must be made along the way to accomplishing some task. On the other hand, the story of computing is one of distancing computing itself first from the programmer and later from the user, who is eventually quite happy to hand over the nitty-gritty as long as control is clear.

And so we come to more and more computing being hidden under the hood, the driver in control but uninterested in the operation of the engine. As the technology of agents matures and expands, we will enter a new chapter in the history of computing. In particular, HCI will expand to encompass interactions with agents and among agents. Computing will deal a little less with human interactions. Not that concerns will disappear, though; ethical and spiritual issues will come to the fore, with constant questioning about the human role in all this.

It might appear that agent interactions would be simpler than human interactions, and hence easier to build theory around. Not so, for we will seek to build those agents with as much intelligence as we can, eventually providing them with as much autonomous agency as we dare. We will not reduce the decision context, but rather continue to populate it with further decision-makers.

And where HCI continues to thrive, it will likely deal with psychological issues touching on the interests of the user and the psychological environment of the task, more than with the traditional issues of cognitive capacity and sensory interaction.

The prospects for theory in all this? The difficulties seem to have expanded manifold, and with them the hope for general theories within computing has dimmed. As we increase the complexity of our world [just contrast today’s work environment with that of a century ago], increased contextualization of theory may be inevitable. This despite the fact that, deep down, all interrelates and thus is unified.

Theory, Theory on the Wall… there is no magic mirror after all. But then, who knows what non-human-centric computing will come up with, theory-wise?