March 13th, 2008 ontoligent
Almost as absurdly popular as the concept of Web 2.0 itself is the idea that the term can’t be defined. After listening to several EDUCAUSE podcasts from the ELI Conference in San Antonio this January, I am struck by how frequently folks remark on the inherent undefinability of the word. It’s almost as if this fuzziness is part of the Web 2.0 meme itself, an auxiliary meme designed to inoculate the idea against being dismissed out of hand for being too simple, or too contradictory to the spirit of Web 2.0 itself. After all, a big part of the idea is that it is more social than cognitive, and to many, social means fuzzy. So, at the risk of committing analytical murder on the idea, here is a definition:
Web 2.0 refers to (1) a wildly successful set of technologies that radically lowered the bar to creating content on the web (blogs, wikis, RSS, etc.), which tended to vastly increase the number of web reader-writers, (2) another set of wildly successful technologies that took advantage of the network effects of that increase of web reader-writers (Google anything, Wikipedia, del.icio.us, etc.) and generated value out of exposing the massive increase in web content and participation back to the client, and (3) the emergent devices, genres, and structures of participation that resulted from this feedback loop (tag clouds, social bookmarks, feeds, etc.).
Looked at from a purely technical perspective, the addition of through-the-web editing and ready-made information architectures (how blogs and wikis improved on building homepages with Netscape Composer) appears as a difference in degree, not in kind. The same goes for the addition of smooth DHTML and AJAX to improve the user experience of working with web applications. But the net effect (pun perhaps intended) of these changes in degree was to sharply increase read-write “prosumption” on the web beyond some critical threshold where the emergent properties of the system changed in kind. At some point, folks, content prosumers as well as web application developers and companies, began to think differently about what can be done with the web. They began to get something of what Sir TBL intended when he created HTML and HTTP almost two decades ago. And that is Web 2.0.
Web 3.0, on the other hand, has the opposite problem: all definitions and little fuzziness to show for it. I have hopes for it, though. If the threshold for creating structured micro-content can be lowered, by being built into our through-the-web authoring tools as easily as tools for creating tags or trackbacks are, then search engines, aggregating tools, and network effectors will begin to seek out, privilege, and select for that kind of content. The pressure will then be on for prosumers to shape up their content for the Engines; distribution will pull production in its considerable draft. It’s being called SWEO and will hopefully have the same effect that SEO (Search Engine Optimization) has had on content production already.
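To make the idea of structured micro-content concrete: microformats such as hCard were one early convention for embedding machine-readable structure directly in ordinary web pages, exactly the kind of thing authoring tools could emit automatically. The sketch below is a deliberately simplified, non-conforming illustration of how an aggregator might harvest such content using only Python’s standard library; the class names (`fn`, `org`, `url`) come from hCard, but everything else here is a made-up example, not a real tool.

```python
from html.parser import HTMLParser

class MicroContentParser(HTMLParser):
    """Collect text from elements whose class attributes use
    microformat-style property names (a simplified hCard sketch)."""
    PROPS = {"fn", "org", "url"}  # hCard property class names

    def __init__(self):
        super().__init__()
        self._current = None  # property name we are inside, if any
        self.fields = {}      # harvested structured micro-content

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        hits = self.PROPS.intersection(classes)
        if hits:
            self._current = hits.pop()

    def handle_data(self, data):
        if self._current:
            self.fields[self._current] = data.strip()
            self._current = None

# A hypothetical snippet of hCard-marked blog content
snippet = ('<div class="vcard"><span class="fn">Tim Berners-Lee</span>'
           ' works at <span class="org">W3C</span></div>')
p = MicroContentParser()
p.feed(snippet)
print(p.fields)  # {'fn': 'Tim Berners-Lee', 'org': 'W3C'}
```

The point of the sketch is that once structure like this exists in the page, an Engine no longer has to guess what a string of text means; it can select for it directly.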
Let me conclude by attempting to link Webs 1, 2, and 3 into something that unifies them beyond a technology trail that begins with HTML/HTTP 1.0.
In the beginning was the World Wide Hypertext. In this Web, all pages were in principle connected by a World Graph of links, but in reality they were not. Instead, the Web was composed of some very large and dense graphs loosely connected at the edges, and there existed many structural holes and isolated islands of content. Then came the Search Engines, and Google in particular, to link everything together. Google itself became the de facto Central Node of a vast network, joining the separate graphs into One. Then came the Engines of Content, the blogs and the wikis, along with their built-in Linking Engines, the syndicators and aggregators, to create a Web of self-linking content. So the World Graph became effectively a reality. But the WG lacked structure, what some ventured to call “meaning”: a layer of mark-up that would make searching and using this vast sphere of Content both easier and more interesting. And, by virtue of Standardized Ontologies, it would convert the randomness of the crowd into something coherent. And so the Semantic Web was created.
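The World Graph story above is, at bottom, a claim about connected components: the early Web was several disconnected islands of linked pages, and a central hub that links to everything joins them into one component. A minimal sketch, using made-up page names and plain breadth-first search, makes the structural point:

```python
from collections import deque

def components(graph):
    """Return the connected components of an undirected graph,
    given as a dict mapping node -> set of neighbors."""
    seen, comps = set(), []
    for start in graph:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node in comp:
                continue
            comp.add(node)
            queue.extend(graph.get(node, ()))
        seen |= comp
        comps.append(comp)
    return comps

# Two dense islands of content with no links between them
web = {
    "a1": {"a2"}, "a2": {"a1", "a3"}, "a3": {"a2"},  # island A
    "b1": {"b2"}, "b2": {"b1"},                      # island B
}
print(len(components(web)))  # 2: separate islands

# A Central Node (the search engine) links to every page,
# joining the separate graphs into One.
hub = "google"
web[hub] = set(web)
for page in list(web):
    if page != hub:
        web[page] = web[page] | {hub}
print(len(components(web)))  # 1: the World Graph realized
```

The missing layer the post goes on to describe is orthogonal to this connectivity: the Semantic Web adds labels to the nodes and edges, not more links.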
Or something like that.
Posted in theory | No Comments »
March 11th, 2008 ontoligent
It is striking to see how almost all of the key cultural practices that have formed around the Web, and particularly Web 2.0 — prosumerism, the long tail, the Open Source development model, mashups, remixing, etc. — can be analyzed and understood in terms of a political economic framework.
The basis of this framework was spelled out by Marx in the Grundrisse, which he wrote between 1857 and ’61 but which was only published in 1941 (check it out here). In that work, Marx outlined in broad strokes the basic structure of the capitalist mode of production, which he described as belonging to phases of a single process: (1) production, (2) distribution, (3) exchange, and (4) consumption. Although these concepts were not invented by Marx, he was apparently the first to regard them as a single system that had the capacity to reproduce itself, and therefore persist for long periods of time as a kind of vast machine or organism of people and things, tending toward change only through a gradual process punctuated by revolutions at critical points.
Writing before the invention of cybernetics and general systems theory, Marx had to rely on a more poetic framework to describe this process of change — the dialectical theory of history, famously adapted and inverted from Hegel — which has led many to dismiss (or embrace) his sociology as a product of Nineteenth-century romanticism. Indeed, the lack of a better framework within which to develop this idea may be responsible for the ill-fated revolutionism that became of Marx in the hands of his Twentieth-century admirers, which Marx himself helped to bring about with his rhetoric. But his basic idea remains entirely consistent with the more advanced concepts of boundary conditions, positive and negative feedback, and the like.
The innovation that allowed Marx to conceive of the four-part cycle as a reproductive system was to join the ends of the system together. Rather than viewing them as a stack, with production at the bottom and consumption at the top, each operating independently of the other, and consumption acting as a kind of terminal “sink” in a circuit, Marx invented, or at least strongly appropriated, the ideas of “productive consumption” and “consumptive production.” Productive consumption is consumption that takes place at the point of production and consumptive production is production that takes place at the point of consumption.
Now, the first concept seems easy enough to grasp — basically, to run, say, a factory, you have to provide that factory with goods — food, fuel, equipment, supplies — to keep it running. Goods are not produced ex nihilo, or from materials that arise solely from nature and then get channeled into the economy. So production requires consumption.
The second concept is more complicated, and it is also the one that sheds light on Web 2.0. At this juncture of the process, the act of consumption produces . . . what? Well, for one thing, it produces people, the source of labor and what Marx called “variable capital.” Eating, the quintessential act of consumption, literally reproduces our bodies. If you consider the social dimension of eating — whom you eat with, when, and why — you can see that eating also reproduces social forms, such as the family and social networks based on friendship, etc. Beyond that, the consumption of people (through sex and marriage) reproduces social relations, by virtue of class patterns of mate selection, a fact that has obvious significance in kinship-based societies. But even more — and this is what has been picked up by folks like C.A. Gregory, Mary Douglas, and Pierre Bourdieu — consumptive production reproduces cultural forms and habits of thought through a complex articulation of taste formation and expression as we purchase and use a variety of commodities that participate in a symbolic dimension as dense and socially effective as it is usually invisible to the consumer. As consumer ethnographers have been telling us for some time, shopping is an act of identity formation, and that is only the tip of the iceberg.
So, what does this have to do with the Web? It illuminates the significance of the concepts of prosumer and remix culture. A prosumer is of course a consumer and a producer of something — podcasts, YouTube videos, blog entries. Remixing is the act of a consumer who produces new products out of the fragments of products he or she has consumed.
[MORE TO COME]
Posted in theory | 1 Comment »