May 12th, 2009 ontoligent
Comparative ontology asserts that humans already have ontologies, and that machine ontologies are both projections of human ontologies (those of the numerati) and material agents that intervene in the ongoing reproduction of ontologies (everyone else’s). Developers of ontologies for the web of linked data would do well to understand the nature of human ontologies, as well as the way machine ontologies intervene in the ongoing construction of social life.
Human ontologies are not like ROM programs, hard-wired into our brains and executed without modification; they are designed to be reprogrammed through engagement with the world. They are one of our most effective adaptive traits.
Ontologies are adaptive
Anthropologists have studied ontologies in the wild for a long time, under the various categories of “structure,” “symbolism,” “culture” and “collective representations.” One of the most important contributors to the study of ontology is the American cultural anthropologist Marshall Sahlins.
Sahlins began as a cultural materialist but had a road to Damascus experience in the 1970s in which he got culture. You may recognize his name as the unfortunate target of fellow anthropologist Gananath Obeyesekere, who criticized Sahlins’ interpretation of the events leading to Captain Cook’s death in Hawai’i as orientalist. In fact, Obeyesekere’s criticism was an exercise in occidentalist stereotyping and, in any case, Sahlins’ control of the material eventually proved his critic’s position incoherent.
Sahlins’ principal theoretical contribution to cultural anthropology has been to retrieve the concept of cultural structure from the ahistorical, formalist, and mechanistic conception developed by Lévi-Strauss, whose own work on mythology belies his more theoretical pronouncements. Rather than separating structure from event (and history), and locating the former deeply within a universal mind-like a camshaft responsible for the jigsaw puzzle of culture-Sahlins focuses on what he calls the “structure of the conjuncture” of structure and event. History emerges as a culturally distinctive second-order structure that results from the ongoing work of categories in praxis. So categories have a structure, but that structure undergoes reevaluation and change as it is applied to the world.
In this, Sahlins is consistent with both Victor Turner’s understanding of processual structure in ritual behavior, and Bourdieu’s concept of the habitus which mediates, through improvisation, the “dialectic of objectification and embodiment.” In fact, I believe that the revised structuralism developed by these anthropologists (and others) is coherent enough to deserve a name; I call it “neostructuralism.”
In Islands of History Sahlins describes the process of cultural (ontological) change in terms of the “risk of reference”: as cultures classify things in the world-as they deploy ontologies-they also put these ontologies at risk. For things in the world do not always behave as classified, or planned. Even the sun has an occasional eclipse. Although the keepers of culture-from priests to grandmothers-try to enforce adherence to the categories, the behavior of things will inevitably contradict the categories and call for their revision. Sahlins reads the Hawai’ians’ classification of Captain Cook as Lono as just such a world-changing event.
Ritual is one mechanism humans use to synchronize the world with world view. As people grow, for example, and change statuses, rites of passage are used to mediate this “contradiction” and reclassify people so that they can fit into the system. Another mechanism is prophecy, where the reverse is true-world views are aligned with a world that has changed. Millenarian movements are the classic example of this: a prophet emerges who can make sense of the new in terms of the old, but changes the old in the process.
Rituals and prophetic movements are the original forms of change management.
This is the ongoing work of culture. Cultural reproduction is never mechanical. That is one reason we humans have history. There is always a disproportion between words and things, plans and situations.
Texts, as forms of discourse, can be likened to rituals and prophetic movements. Novels in particular are efforts both to make sense of and to influence the world, a task in which they often succeed. They deploy a set of categories that make sense, to the author at least, in a certain time and place. The risk of reference works at various levels-from the basal meanings of words out of which tropes are created, to the description of scenes in which the unsaid is shared among a presumed audience, to more elaborate allegorical mappings of fictional characters to real persons. But the referential risk of textuality is compounded as the message is removed from its original personal, cultural, and historical contexts, and the world of the text is forced to fit new contexts for new readers. Hermeneutics arose as a method to retrieve meanings lost in this way; Roman Law and the Christian Bible are two major examples of distanced texts being applied and reapplied to new situations. The French philosopher and hermeneutic theorist Paul Ricoeur called the result of this risk the “surplus of meaning” in a text, and saw it as an opportunity for a kind of ontological excavation.
Databases (and the point of this post)
Now, a data model, such as a set of tables and fields in a relational database, an XML schema of elements and attributes, or an RDF vocabulary of classes and properties, is a plan, a schema of classification. And database applications, like rituals and texts, have their own forms of referential risk to contend with. They classify the world and, in the process, both affect the world they classify and open themselves up for revision by that world as it changes.
For example, the categories produced by a requirements elicitation process for an application designed to improve some workflow, and encoded in a database that sits at the bottom of an application stack, may not accurately represent the workflow as it is actually practiced, and as it will inevitably change as new developments take place-changing personnel, clients, strategic plans, etc. The database, then, is put into a situation-the situation of the conjuncture-in which its categories are at risk.
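This referential risk can be made concrete with a toy sketch in Python’s built-in sqlite3 module. The table and category names here are hypothetical, not from any real system: a status vocabulary elicited at design time is frozen into the schema, the world starts producing a status the schema never anticipated, and the ontology must be revised in place through a migration.

```python
import sqlite3

# The elicited ontology, hard-coded as a CHECK constraint (hypothetical example).
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE tickets (
        id INTEGER PRIMARY KEY,
        status TEXT CHECK (status IN ('open', 'in_review', 'closed'))
    )
""")
con.execute("INSERT INTO tickets (status) VALUES ('open')")

# Practice changes: staff begin parking tickets as 'deferred'.
# The hard-coded categories reject the world as it now behaves...
try:
    con.execute("INSERT INTO tickets (status) VALUES ('deferred')")
except sqlite3.IntegrityError:
    print("category rejected by schema")

# ...until the categories themselves are revised (a schema migration).
con.execute("ALTER TABLE tickets RENAME TO tickets_old")
con.execute("""
    CREATE TABLE tickets (
        id INTEGER PRIMARY KEY,
        status TEXT CHECK (status IN ('open', 'in_review', 'closed', 'deferred'))
    )
""")
con.execute("INSERT INTO tickets SELECT * FROM tickets_old")
con.execute("DROP TABLE tickets_old")
con.execute("INSERT INTO tickets (status) VALUES ('deferred')")
```

The point of the sketch is not the SQL mechanics but the sequence: classification, contradiction by practice, reclassification-the database equivalent of categories being revised in praxis.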
In this situation, databases are like texts-they are built on the armature of a hard-coded ontology, and they can move beyond their original domain of applicability.
But unlike most texts, and very much like sacred texts, database applications (and their administrators) are usually given a central position within an organization. They are often deployed as key elements of an enterprise architecture that calls the institutional shots. Thus they can insulate themselves from referential risk. They can force conformity to their logic-as Michael Wesch’s New Guinea villagers redesigned their settlement pattern to conform to the government census-or they can produce a black market of behaviors in an organization that bypasses the database-governed workflow. This is what faculty do who are forced to use an LMS but would rather use Google Docs.
Comparative ontology can help here. If we view ontologies as always situated, then we should (1) design systems for maximum flexibility and adaptability, and (2) learn a lesson from the ritual life of peoples around the world and throughout history: engage our ontologies in constant reevaluation and modification, making the world (of our organizations) fit where appropriate, and also refining the categories to fit the world.
To meet the first challenge, we shouldn’t create overwrought ontologies, but rather focus on just enough classification to achieve the effects we need. Usually, the effects we are most concerned with are connecting people to people, people to information, and information to information, in as few links as possible.
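“Just enough classification” can be sketched as a small set of triples rather than a deep class hierarchy. The names and relations below are purely illustrative (not from the post); the sketch shows how a handful of relations already supports the effect that matters here-measuring how few links connect people to people, people to information, and information to information.

```python
from collections import defaultdict, deque

# A deliberately minimal ontology: a few relations, no elaborate hierarchy.
# All names and predicates are hypothetical, for illustration only.
triples = [
    ("alice",   "knows",    "bob"),
    ("alice",   "authored", "report-2009"),
    ("bob",     "authored", "memo-12"),
    ("memo-12", "cites",    "report-2009"),
]

# Treat the triples as an undirected graph of people and information.
graph = defaultdict(set)
for subj, _pred, obj in triples:
    graph[subj].add(obj)
    graph[obj].add(subj)

def distance(a, b):
    """Fewest links connecting two nodes, via breadth-first search."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None  # not connected
```

Adding a class hierarchy, cardinality constraints, or inverse properties would not shorten any of these paths; it would only add categories to put at risk.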
To meet the second challenge, we may want to refine what we mean by “social operating system”-for that is precisely what a ritual system is. Maybe it’s time to follow McLuhan’s advice and exploit the ritual effects of the electric, in order to mitigate and shape the more dangerous effects of the electronic. When we build ontologies, maybe we should also be thinking of the physical and virtual spaces in which they will be deployed, and the material and digital artifacts that will be their vehicles of expression.
Posted in Comparative Ontology, Edupunk Reading Group, software design, theory
May 11th, 2009 ontoligent
I am all for user-driven design methodologies. My instinct is to distrust the Central IT ethos of “we know better” because “we think more rationally about things” and all that. That perspective is based on a simultaneous over-valuing of a linear, rational notion of process (“planning”) and a grudging acceptance of user behavior as “cultural” and therefore outside the scope of a requirements-gathering process.
The term “non-functional requirements” speaks volumes and captures the Central IT attitude very well. Under that category, the whole point of effective software design is swept under the rug. We know that software will be most effective when it adapts to user behavior and vice versa, but we often sidestep that issue, hoping for incremental, evolutionary changes to produce the desired effects over the long run. We miss the opportunity to innovate, leaving that to the less timid.
But I also find that user-centric methodologies are based on naive assumptions about what users want, or who The User is, or what the point of the user research is in the first place. Unless you have a very restricted audience for your software-and admittedly one often does-it is very difficult to translate the views of a few people, whether captured by focus group, survey, or even participant-observation, into generalized principles for an application. Ultimately, good design is what works, and we retrospectively attribute success to our process. But we really have no clue.
What is it that one is capturing by user-centric research, anyway? The attitudes and dispositions within a class of individuals? This can’t be it. User attitudes and mental models are highly variable, and they are mutable because humans are adaptive, more than we think. If you build software based on some static notion of what users want, what they say they want, you will miss the effect software has on redefining what they really want. This is because users inhabit cultural environments, and software inevitably has effects on those environments. If you focus too much on the abstract user-what’s “in” the user-you will often have the feeling of the goal posts moving. Or you may end up dismissing the user altogether as fickle and irresponsible, and go on with your own design ideas. If you design software for a living, I am sure you know what I am talking about.
I think the proper focus of user-centric software design has to be the user-in-context. That is, not the user but the Situation. But situation defined in a specific, rigorous way. Situation as the objective, institutional framework of power and infrastructure in which people work. This is difficult terrain to study, hedged in as it is by all sorts of taboos and misrecognitions that keep the social gears moving. Let me give you an example.
One of the areas where the Central IT software design ethos dominates is document management. Two factors drive the design of solutions: (1) developers assume (know) that paperlessness is a Good Thing, and (2) the paper-based workflows that users are enmeshed in are so crufty, complex, and idiosyncratic that it is impossible for users to describe them in enough detail to re-engineer them. The result is that the digital document management solution will almost always build around people’s behavior, or else it will break workflows where it has to. So, instead of stepping back and rethinking the data flows a paper form entails, or taking advantage of the metasocial moment and asking Why Are We Doing This in the First Place, document-logic is reproduced in the software. The effect is not to reproduce the old way and make it more efficient. It is something unpredictable and bound to have hidden consequences, not all of which can be good. Most likely, we’ve preserved the notorious stupidity of bureaucracies and ensured its continued survival in a mutant and more powerful form. Because once categories get encoded in institutional databases, the tail wags the dog. Think health insurance.
So, what to do? I suggest that we pursue theory-driven design. We actually try to make sense of the sociology and anthropology of bureaucracies and operationalize the best ideas in these discourses as design principles. We think of how software behaves as an assemblage of artifacts in a living cultural environment. This is not social engineering, nor is it to tread the tired path of “organizational behavior,” a field that is too closely tied to the executive perspective. It is to pursue a rich, empirical understanding of software in the wild, or at least, the office.
Theory-driven design is not anti-empirical. It is the opposite: for a good theory generates testable hypotheses. It gives a framework to user-centric research beyond the unanswerable quest for what users really want. As they say, there is nothing so practical as a good theory.
A good starting point might be to take Ted Nelson’s ideas about documents and hypertext and combine them with, say, David Graeber’s critical anthropology of bureaucracy. Not to condone Graeber’s anarchism, but to lever the authenticity of perspective he brings to a discussion about the role of documents in the organization. Reading his essay, “Beyond Power/Knowledge: an exploration of the relation of power, ignorance and stupidity,” it is hard not to believe that a radical rethinking of the document, and of document-logic, would benefit from his perspective.
Posted in Edupunk Reading Group, software design, theory
May 4th, 2009 ontoligent
I suppose it is the prerogative of different generations to simultaneously dismiss and retrieve old ideas by introducing new words for them. I have in mind words like “metacognition” and “knowledge management.” In both cases there is an existing word that more or less describes the referent of the new(ish) word: epistemology and education respectively. Both metacognition and epistemology refer to, roughly, the activity of “thinking about thinking,” and the core mission of education is the management of knowledge — producing it, storing it, reproducing it, etc. However, in each case, the intent of the new word is clearly different from the older one, and this difference can be attributed to a different organizational context: knowledge management is about education and research in corporate settings (now defined as “knowledge producers”), as opposed to society or the world at large, while metacognition has flourished within the relatively narrow context of academic departments of education.
But why the complete absence of the old words in the new discourses? Why not call metacognition something like “applied epistemology”? Or knowledge management “corporate education” or “corporate teaching and learning”? It can’t be for lack of familiarity with the older words. Nor can we assume that the newer words are more “sticky” and easier to use; that just begs the question. I think it’s clear that the problem with these constructions is their connotations: they carry too much semantic baggage.
But now here’s the thing: the new words do not simply stand alongside the old ones; they actually seem to take their places. The new words take over the old words at an abstract level, but replace their implicit social meanings in the process. The effect is to implicitly usher newer or different institutions into the space reserved for the old. So knowledge management is about education, yes, but education in a business setting where knowledge is viewed as a competitive advantage, not a general good for the betterment of humankind. And, eventually, this will have implications for education itself, as essays like “Applying Corporate Knowledge Management Practices in Higher Education” become more common.
Similarly, metacognition is about epistemology, but not as an abstract philosophical concern, nor one tied to the remote activity of a purely scientific enterprise as it once was; it is epistemology in the service of classroom teaching and learning, where the users of the word no doubt think it belongs. So the effect of the word “metacognition” is to usher out the ivory tower and to replace it with the more populist institution of the classroom. And this meaning is consistent with the current ethos of educational populism, as expressed in wider ideas like connectivism.
So language really does embody the social: these words are actually the encodings and amplifiers of social changes happening right now. Perhaps those of us familiar with the older names of things would do well to note these shifts and pay attention to their institutional commitments.
Posted in theory