CompOnt Tenet 3: Human Ontologies are Built out of Symbols
Symbols — from core symbols like the Virgin of Guadalupe to abstract ones like the whiteness of Melville’s whale — fix and generate ontological categories. How and why this happens is a question of deep interest to me, but that it is true seems obvious and well established. Human beings create symbols as plants produce oxygen, and symbol formation is inextricably bound to a defining trait of human beings — rich, discursive, and always already metacognitive language (human beings have always talked about talking, a practice that must be regarded as intrinsic to human language). Linguistics up to now, wedded as it has been to a Chomskyan Cartesianism, has missed this role, although philosophers have not lost sight of it. As Ricoeur wrote, “the symbol gives rise to thought.”
The relationship between language and symbolism is complex and (still) not well understood. My own view is close to that of the (admittedly discredited in its original form) generative semantics school, associated with George Lakoff. I believe that categories and rules are in some way generated by the transduction of meaning that takes place between neural representations of concrete objects. Discursive language — not “deep grammar” — tries to fix these meanings in propositional form, but the symbolic substrate has a dynamic quality, in no small measure due to its adaptive nature in response to what Merleau-Ponty called the “primacy of perception.”
With writing and then printing, and with the monopolization of explicit knowledge (written records, reference works, etc.) by governments, universities, and similar institutions, the relationship between discursive fixation and embodied symbols becomes tenuous and contested, resulting in a mind/body problem unfamiliar to ritual societies.
In any case, a number of practical observations follow from this tenet, which I will quickly enumerate, and hopefully take up later:
- Human ontologies are not plans.
- Human ontologies are overdetermined. That is, there is always more than one way to express an ontology. The fixing of meanings will always fail if the goal is to create non-overlapping, non-redundant descriptions.
- Human ontologies are rhizomic. In their natural form, ontologies are not hierarchical. Rather, the hierarchical representation is one form of serialization that works well because of its analogy to kinship (see Durkheim and Mauss, Primitive Classification).
- Human ontologies are local and situated.
- Human ontologies evolve.
April 25th, 2009 at 10:23 pm
Lots of interest… I’ll be ruminating for a while on this, but two things jump out immediately.
First, I’m with you on the idea that creating symbols is intrinsically human. I’ll have to chase down the reference, but I think Gombrich said something very similar about _any_ speech/language being inherently metaphorical. (Also reminds me of what a friend in grad school said about my move from math to Anglo-Saxon literature: “You just went from one indecipherable set of symbols to another.” And now here I am working with RDF and the abstract symbols of URIs!)
Second, in thinking about vocabs/ontologies, I’ve been curious about the generation of the DBpedia ontology through Wikipedia vs. the more formal practices of ontology development, e.g., in “Semantic Web for the Working Ontologist”. It might be that the creation of the DBpedia ontology from Wikipedia (along with the SKOS subjects generated) is as close as we’ll get to a case study for what you are thinking about? And that makes me wonder, too, about the place UMBEL would have in this line of thinking.
Either way, I’m looking forward to reading and thinking more on this. I’m thinking that it can be very useful for thinking through how a good semantic web UI/User Experience should work.
April 26th, 2009 at 3:42 am
I agree with all five of the points you make at the end of your blog post. I am no programmer and haven’t really gotten to know RDF, and I don’t actually know how it is employed today. But it has always seemed to me that contemporary ontologies are produced in a top-down process, by “experts” in the sphere being schematized. The outcome may be very helpful, but it is also inflexible and non-evolving.
An ontology is semantics, and semantics are social and deeply embedded in our practices. I would like to know if there is any truly social-semantic approach. An approach where people can easily set relations (there are three types of relations: A–>B, A<–B, and A–B) and tag them as they want, so that an evolving ontology emerges? Like a “folktology” or so?
I don’t think DBpedia is like that. Is it flexible? Or does it take an evolved ontology from Wikipedia and fix it as a standard?
I would be very happy to hear something about that…
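Purely as a sketch of what such a “folktology” mechanism might look like (every name here is invented; this is not an existing system), assuming the three relation types are directed, reverse-directed, and undirected:

```python
# Sketch of a "folktology": users assert typed, freely tagged relations
# between terms, and the ontology is just the evolving set of assertions.
class Folktology:
    def __init__(self):
        self.relations = []  # each entry: (source, target, kind, tags)

    def relate(self, a, b, kind, tags=()):
        """kind: 'directed' (A->B), 'reverse' (A<-B), or 'undirected' (A-B)."""
        self.relations.append((a, b, kind, set(tags)))

    def neighbors(self, term):
        """Terms reachable from `term`, respecting relation direction."""
        out = set()
        for a, b, kind, _ in self.relations:
            if a == term and kind != "reverse":
                out.add(b)
            elif b == term and kind != "directed":
                out.add(a)
        return out

folk = Folktology()
folk.relate("whale", "symbol", "directed", tags=["melville"])
folk.relate("symbol", "language", "undirected")
print(folk.neighbors("symbol"))  # → {'language'}
print(folk.neighbors("whale"))   # → {'symbol'}
```

The point of the sketch is only that no schema is fixed up front: the vocabulary and its relations grow as people assert them.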
April 28th, 2009 at 1:52 pm
Adrian,
As I understand it, DBpedia’s relationships were at first basically the terms that appear in Wikipedia’s infoboxes. Its ontology is based on a clean-up and normalization of those terms (e.g., first_name and given_name both normalized to first_name).
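As a rough illustration of that clean-up step (the mapping below is made up for the example; DBpedia maintains its own community-edited mapping rules):

```python
# Sketch: collapsing raw infobox keys onto canonical ontology properties.
# This mapping is hypothetical, not DBpedia's actual mapping table.
NORMALIZATION_MAP = {
    "first_name": "first_name",
    "given_name": "first_name",   # variant spelling collapsed to one property
    "birthname": "birth_name",
    "birth_name": "birth_name",
}

def normalize(raw_infobox: dict) -> dict:
    """Map raw infobox keys to canonical properties, passing unknowns through."""
    normalized = {}
    for key, value in raw_infobox.items():
        canonical = NORMALIZATION_MAP.get(key, key)
        normalized[canonical] = value
    return normalized

print(normalize({"given_name": "Herman", "birthname": "Herman Melville"}))
# → {'first_name': 'Herman', 'birth_name': 'Herman Melville'}
```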
And that’s an essential point with RDF: for it to be reusable by many apps, there needs to be some level of standardization, and experts (those with lots of experience in a domain) can help with that greatly.
That doesn’t make them inflexible and non-evolving, though. Anyone can produce and publish or use relationships that extend an ontology as needed. open.vocab.org is a great place to do just that.
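A minimal sketch of that extension pattern, with triples modeled as plain tuples (the FOAF namespace and foaf:name are real; the example.org namespace and famousSymbol property are invented for illustration — in practice you would publish actual RDF):

```python
# Sketch: RDF-style triples (subject, predicate, object), mixing a standard
# vocabulary (FOAF) with a custom extension property in one's own namespace.
FOAF = "http://xmlns.com/foaf/0.1/"
MY = "http://example.org/myvocab/"  # hypothetical personal namespace

triples = [
    ("#melville", FOAF + "name", "Herman Melville"),
    # Custom property extending the ontology, published alongside standard ones:
    ("#melville", MY + "famousSymbol", "the whiteness of the whale"),
]

def objects(subject, predicate):
    """All objects for a given subject/predicate pair."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects("#melville", MY + "famousSymbol"))
# → ['the whiteness of the whale']
```

The design point is that nothing in the data model privileges the standard vocabulary: an app that understands only FOAF simply ignores the extension triples.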
Patrick