
FCJ-088 New Maps for Old?: The Cultural Stakes of ‘2.0’

Caroline Bassett
Department of Media and Film/Research Centre for Material Digital Cultures, University of Sussex

Preface: Ubiquity

Ubiquity is a key principle of ‘2.0’, that bundle of technologies, plans, possibilities, industries, codes and practices, architectures, fictions, and factions offered up as a definition of a post-cyberspace (Soojung-Kim Pang, 2007) world. This is information technologies’ second life, sometimes given to us as ‘a whole [new] way of life’, to adapt Raymond Williams’ famous definition of culture (1958/1993), so that it becomes far more than an industrial logic. And ‘2.0’ and ubiquity go together in another way too: The model is everywhere. Writing this paper, for example, I am referring to technical accounts of Web 2.0 and to various specific cultural analyses (see below), but I am also surfing a poster proclaiming a ‘politics 2.0’ and We the Media (Gillmor, 2004) is open on my desk, inviting consideration of user generated content (UGC) and other actually existing forms of collaborative media production. Simultaneously on-screen I am accessing postings by tactical media theorists discussing their response to ‘2.0’ (e.g. Lovink) and am also accessing a set of stormy debates on a British list about whether media studies should be abandoned for an all-new ‘2.0 version’ (see the MeCCSA list).[1] I am also reading Charles Stross’s Accelerando, a Science Fiction 2.0 exploring what happens to society when Hans Moravec’s uploaded mind children, here configured as the entertainingly named Vile Offspring, clash with more or less embodied humans within the grounds of a universalised Economy ‘2.0’ (Stross, 2005). Stross’s work is easily read as a fictional/dystopian extrapolation of principles explored in Yochai Benkler’s Coase’s Penguin (Benkler, 2002), itself only one of many popular and business studies based treatises considering real-earth ‘2.0’ information economies. Rheingold’s Smart Mobs (2003), a defining account of connected and pervasive computing, is there too, although retrospectively – and it is now submerged by the slightly later plethora of 2.0-ness.
Here is evidence – dispersed, unofficial, partial, chaotic, self-interested, but also apparently compelling, inescapable, pervasive, ubiquitous – of a changing order of information; both a new technical configuration and a change in informational culture.

This paper sets out firstly to explore the relationship between the ‘2.0’ and the artefacts, architectures, and use cultures gathered under its banner. ‘2.0’ is explored as a model with descriptive and performative powers; a model that operates with some force, tending to occlude certain characteristics of contemporary techno-cultural forms and practices whilst foregrounding others, and tending also to produce a particular assessment of past and future convergence trajectories (what is to be corrected, what is to be realized). In the later sections of the paper, other ways of mapping contemporary convergence are explored. Some of these are finally pulled together to form a series of axes of convergence; not offered as a complete model, but as a series of connected lines of inquiry.

Introduction: New Models for Old?

‘2.0-ness is not something new, but rather a fuller realization of the true potential of the web platform…’ (Tim O’Reilly, 2005, my emphasis)

It has refreshed the tarnished visions of the information revolution, but what is new about the 2.0 model, and what connects it to informational models widely influential in the last century – including early convergence models such as the one developed by Negroponte at MIT in the late 1980s? Is the difference ubiquity itself? Within the highly converged/highly pervasive environments that ‘2.0’ maps, are there still meaningful distinctions to be made between cultural and technological/industrial models of ICTs, or have these simply broken down? Certainly ‘2.0’s broad (territorializing) ambitions seem to resonate with events on the ground. For instance, in an age when technical/industrial modellers are set on mapping the ‘cultural’ uses of the internet ‘scientifically’ (e.g. through semantic mapping) and industrially (with the aim of exploiting the labour of consumption), what are the terrains of a specifically cultural mapping of contemporary techno-cultural developments, and what chances are there that it can produce a genuinely alternative understanding of the general dynamics of the emerging global information and communicational system? More, what might this cultural mapping need to look like, what would it include, at what scale would it operate, and what might it occlude? In sum, given the energetic way that ‘2.0’ has been applied, given the contexts within which it has arisen, and in particular, given the way that it has been made to do cultural, social, and political, as well as industrial work, is this one model that really does fit all?

‘2.0’ is understood by Tim O’Reilly, who framed the term, as a technical and business corrective to the shortcomings of the early Internet. It points to ways that new and emerging technologies can be exploited in the ‘right’ way to provide for the web ‘a fuller realization of [its] true potential’ (O’Reilly, 2005). The interplay between the 2.0 model (and its performative force) and the technologies it describes is explored further below, but of course there were also earlier models, themselves inter-twined in complex ways with the terrains they mapped. The first of these is more narrowly industrial/technical: In the 1990s, the Internet was contextualized within a series of accounts of technological convergence developed from the 1970s (see de Sola Pool, for instance), most famously articulated by Nicholas Negroponte as a set of ‘teething rings’ (see Brand, 1987: 10). This model became a standard way of thinking about convergence and is thus one of the models that ‘2.0’ re-thinks – doing so both in relation to new formations on the ground (in other words convergence has not developed as expected) and in relation to its core pre-suppositions. One key difference is this: Negroponte’s model mapped the actual and predicted coming together of industries and objects/apparatus/content (e.g. telecoms and telephones, Hollywood and content); ‘2.0’, by contrast, diagnoses/projects/demands a particular relationship between forms of practice and forms of architecture. ‘2.0’, in other words, is based on an understanding of the dynamics of the system (the new media ecology) in use.
Taking this distinction or ‘correction’ as a key starting point, the cultural stakes of ‘2.0’ can be opened up through an examination of the model itself and through a consideration of how the model maps the dynamics of some of the modes of participation new media networks afford. This does not imply a narrow focus on the use practices of established users, which would amount to mapping user activity back onto audience activity, but rather an exploration of the participatory dynamics of the media system as a whole. This technically, industrially, and market-driven system needs to be explored critically, and in relation to questions of social power, if it is to be viewed in its fullest extent; and that requires consideration of questions of culture, once again taken as ‘a whole way of life’. Not only industrial but also technical models are myopic if they do not do this. Unsurprisingly – but this is surprisingly often ignored – culture is an obligatory passage point in exploring techno-cultural formations.

The question of participation may not have been foregrounded in Nicholas Negroponte’s convergence rings, but it was important to many cultural theorists exploring techno-cultures in the ‘cyberspace’ years of the 1980s (where the locus was largely imaginary) and in the 1990s (within early net culture). In particular, Fredric Jameson’s influential analysis of informational culture and late capitalism provides another starting point from which to consider today’s models and what they set out to fulfil or correct (Jameson, 1984/1991). The question of what kind(s) of ‘corrective’ might now be applied to Jameson’s original analysis of informational culture, or whether Jameson’s cultural logics can in turn provide a ‘corrective’ to contemporary mappings, is also taken up in this paper.

A consideration of Negroponte’s and Jameson’s work on information thus frames an exploration of ‘2.0’ as a 21st century model of convergence. Of course ‘2.0’ is not the only contemporary model of developments in ICTs emerging post what we might call convergence 1.0. Of particular note here, Henry Jenkins has recently elaborated an influential take on convergence culture from the perspective of (a particular variant of) cultural studies (Jenkins, 2004, 2006). This paper thus engages with four models: with Negroponte and Jameson’s two early mappings, with ‘2.0’ as a technical and business model, and with Henry Jenkins’ account of the ‘cultural logics’ of convergence culture. The latter is interesting because it sets out both to redress 2.0’s perceived neglect of culture and to critique cultural studies’ recalcitrant insistence (at least in some quarters) on retaining its engagement with questions of ideology. That is, Jenkins sets out to ‘correct’ Jameson (and his view of cultural studies) as much as he wants to correct ‘2.0’ (for its narrow technical focus). Being one of the recalcitrant myself, I wish to engage with, but also to diverge from, Jenkins’ account and to develop an alternative reading of informational culture.

The immediately following sections of the paper look briefly at these models, both as they stand alone, and in relation to each other. In the final sections of the paper I propose a new correction. I explore contemporary convergence processes through six axes, each of which focuses on different modes of participation. The intention is to expose a dynamic of expansion/contraction, common to many notions of convergence, implicit in the dialectic of information as control and freedom, and embedded in arguments around activity and recuperation, which is configured in each of the axes explored. In particular I focus on contractions in the system, defining these as moments of reconciliation or alignment where meaning or significance is taken. These axes are not intended as a new model (one that terminally corrects ‘2.0’) but are intended to make a ‘different’ point about ‘the differences that make a difference’ between cultural and business models of contemporary ICT networks.

My intention is to show that cultural theory’s capacity to explore questions of social judgement or cultural critique (to ask how information’s ‘potentials’ might be valued or judged in their cultural specificity) is important. I also want to suggest that the ontological approach under-pinning the latter form of inquiry does not share the teleological approach of much 2.0-speak. Put bluntly, my starting point is that there is no net out there waiting to be ‘fully realized’ once the early models have been corrected and implemented; there is only what is produced through a complex and on-going process: the materialization of a technocultural form in a particular historical context, within which earlier models and future predictions also figure. This take on ‘what will be’ says that the coming shape of techno-culture cannot be understood in relation to an already existing ‘ideal’ future system to which contemporary models approximate increasingly closely (where the degree to which they do this often indicates the degree to which they are judged a success). It also changes the status and role of maps and models, when the latter are understood as interventions in the cultural imaginary, seeking to fix the future before it has been made.

Precursors: Convergence Models and Cyberspace Cultures

‘Convergence 1.0’

In the last decades of the 20th Century a series of influential figures, amongst them Ithiel de Sola Pool (in 1983 in Technologies of Freedom), Nicholas Negroponte of the MIT Media Lab (Brand, 1987), and John Sculley, CEO of Apple Computer, set out to explore/predict the coming together of a series of previously discrete forms, industries, processes, and hardware and software objects. Negroponte’s ‘teething rings’ or Venn diagram model of convergence became the best known of many models mapping convergent processes in the late 1980s and 1990s (see e.g. Yoffie et al., 1997). All described the dynamics of the integration of older media technologies into new informational/communicational forms and contents (through re-mediation or absorption) and predicted the emergence of new (converged) media technologies and contents as part of the same process. All also understood (rightly) that the converging information networks would in the near future occupy a vastly expanded terrain, penetrating far further into spheres of previously unmediated culture than the sum of their constituent parts might suggest. This general model of convergence informed many industry blueprints for future developments in the last decades of the 20th Century. Apple’s Knowledge Navigator vision, drawn up in the 1980s, is an early example of an attempt to think imaginatively about the consequences of convergence,[2] and is also a model that had a certain performative force within the PC industry, influencing the development of hardware and software projects and products – the ‘integrated application’ concept drew on it, as did the Newton PDA, Apple’s early stab at what is now termed mobile and locative computing.

The computer industry explored content/controller convergence via multimedia in the mid 1990s, but the Internet sat behind early convergence models, even before its possibilities were widely understood outside relatively specialist circles. By the late 1990s the Internet was a buzzword in popular culture and convergence had become synonymous with the ‘internet explosion’. From then on the trajectory of convergence was understood to describe that process through which the emerging ‘network of networks’, gathering together the intelligence and control capabilities of myriad computers, accreted to itself older media forms, yoking together their discrete contributions and activities and organizing, amplifying and corralling them into a new totality. Convergence models with networked computing placed at their centre thus mapped out an existing ICT configuration, pointed to a proximate destination for such a system, and also pointed towards (described) a fully informational system to come.

The trajectory of this model was towards total convergence. If it was increasingly clear that the rings in the Venn diagram Negroponte drew in the late 1980s would not come to overlap completely, the inference often made was that this was due to local difficulties (specific reverse salients to be engineered out), bad implementation, or inefficiencies in the market (where the re-introduction of distinction/difference or monopolization could block progress, or where delays in deregulation/liberalization prevented its full operation). The prioritization or naturalization of the technically-given trajectory of convergence, whose ‘nature’ would be derived above all from the digital ‘substance’ of the new networks, was thus used to inform a particular industrial/market direction. This produced a form of convergence between the technological and the industrial within these models, so that some convergence maps current at the time elided the technological with the industrial while others skipped over the technological shift entirely, explaining convergence in terms of the fusion of the various industries (‘telecoms’, ‘computing’, print, ‘Hollywood’) that the arriving technical shift would bring about.

In sum, convergence discourses, leaning on a sense of ontological revelation, by definition entailed a destining of the cultural and the social by the technological, a sense that convergence in one domain, the domain of the technical, would have ‘inevitable’ consequences in others. This is why convergence became something of a ‘dangerous word’ (Silverstone, 1995) for those committed to an analysis of the intersections of information technology and the social world that begins not with what is given, but with what is made between humans and their machines, techno-culturally as it were, and what might therefore be critiqued. This, however, did not mean there wasn’t a certain take-up of the convergence model, with its teleological understanding of information technology, within social scientific and humanities-based accounts of new media culture in the mid 1990s.

Informational Capitalism

A cultural mapping of information expansion, produced more or less simultaneously with the early industrial convergence models, is found in Fredric Jameson’s account of postmodernism/late capitalism (Jameson, 1984/1991), which grapples with the information society analysis developed by Daniel Bell and others in the post-1968 era. In this work Jameson set out to reveal the cultural logics of informational capitalism, engaging with the architectures, films, and bodies, and with the modes and forms of experience, arising in a world re-built at previously unimaginable scales by information technologies. The vastness of the architectural structures and technologically defined landscapes Jameson explores becomes key to a vision of a depthless cultural space. This space, too vast to navigate or measure, provides no place from which to launch older forms of analysis or critique, and renders older forms of ordering such as narrative ineffective. Within it the human body can no longer easily hold itself together, or indeed hold itself apart from what might previously have been presumed to be distinct from it – nature, non-human objects, the organic and in-organic bleed into each other (the allusions to Haraway and Deleuze, pre-dating later cyber-theory where they are widely taken up, are evident here). This account of forces at play in late capital at once stresses the centrifugal and the centripetal. If much – community, narrative patterning, the individual’s sense of the self as a unified self, for instance – is forced apart, much is also brought into the same plane through information’s capacity to dematerialize/re-materialize. More, the logics of the disintegration/integration explored in relation to culture and experience are here bound up into a wider dynamic. Jameson argues that late capitalism produces a culture that shows itself to us as purely technological but that remains informed by another logic. As he puts it:

[O]ur faulty representations of some immense communicational and computer network are themselves but a distorted figuration of something deeper, namely the whole world system of present day multinational capitalism…(Jameson, 1984: 79)

It is through seeking to understand the cultural forms that this contradiction produces and articulates, that Jameson concludes that cultural logics of the coming information society are characterized by ‘schizophrenia’ – a fragmentation of the self and of language, and by a dis-orientation. The latter is both critical, in the sense that traditional forms of critique are stymied, and ‘real’ in the sense that it is configured in, and materialized through, the forms of material culture (the informational technologies hard and soft, in buildings and on screens, in fictions and experiences) these dynamics produce.

In response to this dis-orientating new world, which articulated its own disguise, Jameson’s call was for exploration: New forms of cognitive mapping and sensory re-orientation capable of rendering this new world known, needed to be developed, he said. Elsewhere Jameson explored a series of tools or figures that might begin to do this. His exploration of ‘dirty realism’, a figure describing the forms of sensory life and action possible within the landscapes produced by highly informated capital, in terms of intensity and by way of an exploration of circuits of appropriation/re-appropriation, is one example of this (see Jameson, 1994; Bassett, 2007b).

Jameson’s and Negroponte’s models of the cyberspace age, strikingly divergent in register, nonetheless map the same period, providing more or less direct contexts for the claims for ‘realization’ or ‘correction’ made by those now developing ‘2.0’ as an industrial model and for those exploring the cultural stakes of contemporary information networks. They may also pre-figure a naturalized set of alignments that become evident in later accounts, since centripetal moments of the convergence process are stressed in the technological/industrial account, while centrifugal moments (moments of dissolution or fragmentation) are emphasized in the cultural analysis (although in Jameson’s case this is clearly by no means all that is explored). This emphasis, common in models of convergence at the time (and oddly enough found also in many ‘weak’ accounts of social construction), exposes a presumed division between culture and technology that might be questioned rather than accepted. In this paper my intention is to disrupt the naturalized vision that says cultural perspectives (cultural theories) on information and society focus on the gaps and the spaces (the increasingly rare moments when the lifeworld disrupts the system or irrupts into it), while technologically/industry-led perspectives explore the ways in which information technology increasingly joins itself up.

After ‘Convergence’

‘2.0’ as Post-Cyberspace Manifesto

If Negroponte’s teething rings modelled convergence and the rise of ICTs in the last decades of the 20th Century, Web 2.0 sets out to describe and predict development trajectories for contemporary forms of new media and, as noted above, it is intended to describe ways to realize what the earlier project left incomplete. The term itself was formulated by Tim O’Reilly, a net publisher and industry insider with a long track record of engagement in the cultural politics of the new media (via the Electronic Frontier Foundation (EFF), for instance). He defined ‘2.0’ as a set of ‘design patterns and business models for the next generation of software’ (O’Reilly, 2005), and offered it up as a normative and descriptive model for the development of new tools, products and ICT architectures (Berry, 2007). Developers present at the launch event were invited both to take up ‘2.0’ as a challenge for future development and to start writing to its standards.

The affordances, architectures, protocols, tools, services, and products that make up ‘2.0’ thus both map a new landscape and produce a new model for the evolution of ICTs. In particular ‘2.0’ de-prioritizes the question of (degrees of) device convergence which was a central issue in the earlier industrial models such as Negroponte’s, although traces of this concern are still evident (for instance in debates around the relative merits/industrial strengths of phones over pods as base platforms). Instead ‘2.0’ focuses on architectures and tools enabling new or more advanced forms of participation, naming them as key elements in contemporary ICT networks. And, it explores participation itself as the motive force that may enable the real and/or final fulfilment of these networks. The ‘2.0’ list thus includes software architectures enabling new and extended forms of collaborative production, new forms of interaction and social networking, new forms and extensions of code sharing, and new forms of content handling, across platforms.

The technologies offered up as ‘2.0-inspirational’ in O’Reilly’s list are overwhelmingly those allowing users and producers to navigate, map, make sense of, and contribute to an expanding information ecology in smoother, smarter, and more dynamic and engaged ways (e.g. RFID, tagging, Wiki, Google), or to interact with each other more ‘smartly’ through informated social networks (e.g. Facebook). This list thus recognizes the centrality of user activity to the well-being of (social capitalization of) the network as a whole, and understands this activity as essential to the evolution of what is read as a profoundly collaborative system. These principles inform the choice of web artefacts, services and operations taken to define the new (‘2.0’) over the old (blogs over homepages, folksonomy over taxonomy, for instance) in a ‘2.0’ in/out list proffered by O’Reilly, and since then widely adapted and extended. They are elaborated to produce, as a meta-discourse, a demand for open architectures as a general principle. ‘2.0’ then can be summed up as a business manifesto demanding the sustaining of the open architectures deemed to be required for the maintenance and development of collaborative peer production, and for the development of new tools that will enable the continued evolution of the net as a technical and business proposition; keeping it ‘live’ (non-coincidentally RSS is one of the 2.0-list technologies). Disturbance here is regarded as intrinsic to the developing architecture of the converged media system (indeed in this restricted sense the ‘2.0’ model is rather more informed by first wave cybernetic principles than the earlier model) and the future well-being of the system is contingent on enabling a continued process of interplay between the extended uses the system enables and the exploitation of what is produced or adapted through use: new code, new objects, new practices, new standards, new architectures.

‘2.0’, as a model, is in sympathy with contemporary network dynamics. It is also the case that modelling this kind of activity within Negroponte’s original rings would be almost impossible, not because the rings model was entirely static but because what grows the media system in the early model is the one-dimensional movement of technologies and industries into the central overlapping quadrants, rather than the heterogeneous forms of activity found within the field as a whole and undertaken by many kinds of actors/actants. ‘2.0’ might be an effective corrective in this sense. But is it also a cultural model? Certainly it immediately skipped registers. Spreading from its industrial base it has been widely deployed within cultural analyses – the ‘2.0-theoretical’ adoptions come thick and fast. Four reasons for the allure of ‘2.0’ as a cultural model are briefly set out below. I raise them as provocations:

(i) Processes of the informatization of the lifeworld, to which ICTs are integral, continue to re-draw the boundaries between what have traditionally been understood as the terrains of culture and technology. Jameson’s account of the information society and Negroponte’s mapping of the convergence of the key technologies underpinning a new media landscape were developed in the same time frame but operate in registers that set them far from each other. It was often with real difficulty that the cultural theorists, digital artists and informatics professionals of the 1990s recognized that they were talking about the same sets of technologies. The possibility that they might be able to use the same frameworks to do so was remote. Today the languages, at least of computing and culture, have converged somewhat.

(ii) ‘2.0’ appears to plug a real gap in media theory: Contemporary forms of convergence require a cultural studies/media studies model able to move beyond standard political economies of media (for example UGC and the recursive principle of reality TV both blur Hall’s circuit of culture model even within the restricted field of television for which he developed it). ‘2.0’ is easy to deploy as a replacement media model.

(iii) The deployment outlined in (ii) is perhaps the more tempting because ‘2.0’ does move beyond purely technological description/prescription, and it may be concluded on the basis of the meta-call it makes – its support for open architectures as a principle – that it is far more than a technical/business model. Certainly it appears to make some ethical assessments of the technologies it maps. Viewing the ‘freedom’ to author ‘co-operatively’ as good for (the software) business, it describes and valorizes what it understands as an architecture of freedom. This is of course a restricted claim, and within O’Reilly’s manifesto, it largely remains so. However, many later accounts of ‘2.0’ presume that ‘2.0’ take-up in pervasive ICT systems must also be good for other things: freedom of information, democracy, the encouragement of forms of media content that might support the production of a workable public sphere in a democracy, and/or a fifth estate, or the avoidance of entirely surveillant societies, for instance. I return to ethics briefly below but note here that the forms of use and deployment outlined above may be possible in the architectures a ‘2.0’ model might enable, and may well be closed down in models designed around closed systems such as the proposed ‘clean slate’ internet. On the other hand they are incidental to the central thrust of O’Reilly’s model itself – which concerns efficient software production and the conditions within which it can prosper [3] – and this in the end is how the system is judged as delivering or not.

(iv) Finally, there is the notion of technology itself as self-corrective: Jameson, as noted above, argued that new forms of navigation, new ways to speak, and new ways to share knowledge were culturally necessary to understand informational culture in the 20th century. His mapping of post-modernity thus ended with a demand for new forms of mapping. Today, the technological developments that ‘2.0’ models are themselves all about mapping, modelling, navigating, about designing forms of interaction enabling participation: This ‘improved’ form of informational culture, we might say, comes with its own Sat Nav, its own on-board navigation system. In this context ‘2.0’, as a model, might lay claim to descriptive neutrality – functioning as an alias or pointer to what is self-contained within the technologies it describes – rather than operating with any performative or shaping force: If code can do our mapping for us, beyond what any industrial/technical or techno-cultural text can offer us, do we need any other map?

With these – disputable – points in mind, and in pursuit of cultural distinction in an age where theory itself (a form of mapping) threatens to begin to ‘converge’ with its object, I turn to Henry Jenkins’ consideration of convergence culture, the fourth model to be explored here. The virtue of this account is Jenkins’ adamant assertion that specifically cultural accounts of the new forms of information are (still) necessary, and necessary to build an understanding of the formation of the system as a whole.

Jenkins: Fandom as a general principle?

Henry Jenkins’ vision of convergence culture begins with the assertion that technological accounts of convergence are in the main accounts that are unable and unwilling to grapple with social and cultural questions (Jenkins, 2004, 2006). Jenkins argues convincingly that this produces a degree of blindness, not only to the cultural implications of ICT networks, but to the dynamics of the network as a whole. He thus recognizes that questions of privatization/open computing are crucial to the future shape of this system and stresses that its technological development trajectory (the future flowering of ‘2.0-ness’, or its curtailing) will be decided on the grounds of culture and political economy. Many cultural commentators share this view of the importance of the outcome of contemporary intellectual property disputes for the shape of future systems, arguing that their topology will be the outcome of the tussle between backers of a system based on the advanced actualization of various forms of collective (Levy, 1994) or participatory intelligence and those who back the re-privatization of information networks through the extension of intellectual property law and its implementation in various hardware and software forms (e.g. trusted computing, clean slate style moves, ad hoc DRMs, or further legislation).

From this starting point, one which amounts to a defence of architectures enabling participation and collaboration, Jenkins extends his earlier reception- and fan-based explorations of audiences and users (e.g. Jenkins, 1992) and his work on television to map some of the new practices of participation across new media networks of many kinds (e.g. transmedial narratives, new forms of blogging). To these twin ends Jenkins’ account melds political economy, audience studies, and genre analysis to considerable effect,[4] and the model he develops thus lays claim to being more extensive than narrow technical versions of contemporary convergence. In addition, because it can more fully map the dynamics of ICTs and because it can explore questions such as the social significance of contemporary techno-cultural forms (e.g. transmediality) and practices (e.g. social networking), it can potentially facilitate the (re)formulation of a cultural politics/cultural policy.

My own problems with this account begin with the cultural politics it configures. Jenkins’ understanding of the dynamics of contemporary convergence culture is avowedly based on his earlier work on fandom, and what is being suggested here is the generalization of the logic of fandom, now designated by Jenkins as the preferred mode of participation in convergence culture. Viewed as a mode of participation that comes of age with the development of a particular media system (a system that demands interaction), as a response to a particular form of production, and as a particular user relation, fandom is configured here as both normative and descriptive. For Jenkins it is this mode of engagement that is finally ‘fulfilled’ by the technologies of contemporary convergence – and it is essentially fan studies that he offers as a corrective to earlier ideological mappings of the dynamics of informatization. Fandom, that is, becomes Jenkins’ contemporary info-cultural logic. This produces (naturalizes) a particular set of demands. Firstly, it requires that the open system model is broadly defended, and although this may make many kinds of sense, it does not begin and end the discussion. There remain issues about how and why and in what way and in what form it should be defended (as a business model, as an emancipatory model, as a creative model, as an ethical model?). Secondly, it argues that a new kind of contract needs to be developed between big media and consumer-citizens-fans, based on joint responsibility in a shared, although clearly entirely unevenly controlled (and Jenkins does recognise this), media economy in which we can all participate.
Thirdly, this is explicitly translated into a demand that academics and policy makers abandon the historical distinction made in media/cultural studies/cultural theory between different kinds of cultural producers – the market, the public sector, activists, for instance – to work with corporations to shape and forge agendas for forms of participation that satisfy perceived social and cultural needs. The cultural injunction, offered at the level of critique and at the level of policy, but generated by a particular reading of what the forms of information give, is to participate from within, rather than disrupt from without. As Jenkins puts it, we should blog, not jam.

The mappings of the cultural logic of an informational system made by Jameson and Jenkins at specific moments in the history of informational capitalism, each widely designated as a moment of the new, may thus be understood in very divergent ways. Unlike the Jameson account, where an ideological reading of the claims of the information age distances (techno)cultural analysis from the technological landscapes it describes, Jenkins’ ‘corrective’ account is determinedly post-ideological and as a consequence both highly functional and highly convergent. In his fan model, participation, engagement, and collaborative work all build the system and are therefore to be promoted. The gulf between Jameson’s sense of the cultural politics of informational capitalism last time around and Jenkins’ account, developed in an era where ‘2.0’ begins to have purchase as a model describing a new system, is wide here. And the key differences concern not so much the evolution of the technologies of which these two theorists write, nor the principle of activity (Jameson’s account, after all, explores the experience of living within the forms of information). What divides these accounts is their critical approach to thinking about the relationship between forms of technology and forms of culture as they play out within a particular historical horizon. At issue is not participation itself but how participation in this informated cultural landscape is understood and judged.

The final section of the paper explores forms or modes of participation within ICT systems across a series of six axes. Put together, they begin to trace an alternative cultural map of the contemporary constellation: an alternative account of what connects, after convergence.

The axis of actors and agents

Participation in contemporary systems is not reducible simply to ‘use’: The system/user model may imply a break with the broadcast/receiver mode typical of older media (retained to some extent even within ‘active audience’ theses), but can simultaneously produce a problematic restatement of the under-pinning binaries text/audience, producer/receiver. These binaries are inadequate because ICT networks increasingly involve actors who do not ‘use’ as earlier audiences used to ‘watch’. As an example we might fly in some pigeons: In the Internet of Things Julian Bleeker explores the changing dynamics of participation and agency emerging as ecological networks delegate forms of action to non-human agents (Bleeker, 2005). His example is a Beatriz da Costa artwork which equips pigeons[5] with sensors, so that, as they move through the city, they map pollutants in the environment;[6] the ‘they’ here refers to the pigeons (who move) and to the sensors (which/who sense). The pigeons, the sensor technologies, and the information gathered all help to constitute an active network (to stay with Bleeker’s ANT-influenced account), or a mapping system in process. This pigeon system underscores the degree to which the conventional distinctions between the user and the system, to which we have referred, are undermined through the insertion of non-human actors/actants into the media ecology. Participation in this model cannot be aligned either with human use (for obvious feathery reasons) or with use per se, if this is distinguished from what is being used (the system). Thus the term ‘user’, which once so usefully took us beyond the (television-age) notion of audience, is here revealed as problematic – and this is one reason why the fan/system model, with its naturalized (and binary) division between (human) user and (machine) system, itself produces problems in mapping systems containing these new forms of delegation.
Going beyond ‘use’ allows some new distinctions to emerge between forms of agency. Notably, the human/non-human agency of different actants in hybrid information networks can be understood as at once irrelevant and highly significant – irrelevant in that humans and non-humans can be actants in a system (taking on old user positions), but significant if the intention is to trace out the power dynamics of these systems; if it ‘matters’ how, or with what choice, or with what degree of understanding (or how reflexively) humans might be incorporated into information systems ‘pigeon-style’, providing their ‘incidental’ inputs for free. Bleeker’s analysis, of course, centres on a specific network, but smart non-human agent/actor hybrids increasingly feature in the developing internet of things.

The sensory axis

Bleeker’s/Da Costa’s pigeons, particular kinds of actors, cannot be said to attend to what they are doing in the same way as humans. However, exploring forms of (human) attention might provide insights into sense perception, a mode of participation often neglected in early web studies, partly because of their relative neglect of the body. In an account of digital identity exploring how individuals feel and act across multiple spaces, Helen Kennedy has argued that the pre-occupation with identity slippage/play that marked much 1990s writing on earlier cyberspaces needs to be reassessed (Kennedy, 2006). Asking how the forms of separation and fragmentation that multiple spheres of action produce are overcome, Kennedy argues that it is necessary to de-emphasise performative models of identity, which have often stressed fragmentation (e.g. Stone, 1996), and to think in terms of a phenomenological focus on embodied forms of ‘feeling and being’. This is not only because it is through the body that connections are made and re-made, and through which diverging ‘selves’ are reconciled or brought home, but because it is through the senses that we interact with the increasingly informated environment.

In work on mobiles I have explored similar issues of connectedness, considering forms of attention that emerge in mobile phone use in public spaces where, using intimate technologies to re-organize our engagement with the sensory environment, we divide and combine auditory and visual streams in layered and partial ways, so that, moving across the re-configured (because informational) city, sensory streams are divided, re-doubled, and overlaid (Bassett, 2003). Viewed through the axis of sense perception and attention, it is clear that participation, involving fracture and division (divided attention), also implies reconciliation. This form of sense-making is sensual and cognitive (through the sensing body we bind up these experiences into particular forms and patterns and make them meaningful in specific ways), operates continuously (e.g. we use it to navigate the city from moment to moment), and also operates in retrospect (to retrospectively apply particular forms of meaning or significance). It is also learned, indeed it is a form of habitus, and as such socially constructed. To explore participation through the axis of sensory engagement opens up rather than closes down questions of social power (e.g. in an urban space, what lets particular groups divide their attentions, what forces others to attend only to the present, or sets up a wish to ‘abscond’ as far as possible from a particular place), and in doing so opens space for the development of forms of political contestation.

The axis of cultural production

The third axis is cultural production. Forms of contemporary (digital) content production increasingly involve the recombination of shards of already existing code or content and the use or re-use of shared cultural memes (using the word metaphorically). What is made in this way can be viewed simultaneously or serially as an individual production, as a discrete production with multiple authors, and/or as a radically shared production. As the number of these products grows, disentangling their genealogies and making these kinds of attributions becomes increasingly complex – specifically borrowed chains (of code, for instance) are hard to trace, while distinguishing between the internal components of a production and the tide of opinions, knowledge, and content arising within the web – which might contribute in non-specific ways (or contribute non-bundled knowledge) to the production of new cultural objects – becomes almost impossible.

This does not stop arbitrary divisions being made. Indeed, at the moment at which a work becomes a commodity (when its social/cultural capital is translated into economic capital) precisely these kinds of divisions are necessary. At that point not only is the loose ‘network’ view of creative authorship hard to sustain, but a diachronic view of the work itself – one in which the work is understood to contain the archive and the time of the archive that makes it, so that it becomes something across which meaning and significance may emerge laterally rather than being defined at a single moment – is also lost.

The market thus performs a work of reconciliation that operates in tension with other more organic processes of reconciliation and expansion that are also characteristic of the formation and reformation of contingent artefacts within web culture. The latter might allow for different forms and modes of production to be recognized as contributions to the formation of a work and might therefore take forward new forms of creative production beyond the traditional commodity.

The Creative Commons copyright/left system is of course an attempt to negotiate precisely this imbroglio (see Berry, 2007). It seeks to define ownership in ways capable of recognizing the complex genealogy of an object (through recording multiple creative inputs) and of rewarding ‘authorship’ (of each recombinant shard) when rewards are in the offing, both in order to allow the continued making of objects and to reward those who make them. It thus famously ‘reserves some rights’ for the individual while attempting at the same time to recognize the essentially collaborative nature of cultural work and the particular forms of collaboration immaterial work supports (Berry, 2007). Even in the Creative Commons system, the status of a work is thus finally defined in relation to its position vis-à-vis the market[7] – and even here this is a translation that can produce the disintegration of the work as a legible object, or that can exclude forms of contribution that are not measurable in terms of individual ownership: Seriously considering the axis of production in an account of contemporary ICTs provokes questions about the role of the market in defining not only ‘who owns what’ but ‘what kind of participation counts’ as a contribution towards the constitution of creative digital work.

The axis of representation

The fourth axis is representation, and the intention is to focus attention onto what Appadurai called the mediasphere, distinguished somewhat from the supporting technosphere (Appadurai, 1990), in order to question the terms of representation in newly convergent systems. Roger Silverstone’s 2007 account of the mediapolis, defined as the cosmopolitan space created through and within global ICTs, through which we encounter the other, is useful here. Arrangements pertaining in this space (e.g. through narrative, genre, image treatment) organize the distance and/or proximity between those taking part in these encounters – which might imply that those who authored the spaces take responsibility for the encounter. Silverstone, however, argues that in new media ecologies both producers and users have to take responsibility for this encounter: As he sees it, the ubiquity and electivity of media use in an era of pervasive ICT networks, viewed as a key distinction between new media post-convergence systems and old ones, makes this kind of responsibility inescapable. If there are some parallels between this approach and Jenkins’ sense of ubiquitous participation, there are two key differences. First, questions of participation and responsibility are here explored in relation to questions of civil society rather than fandom. Second (following on from this), Silverstone’s intention is to question what might constitute responsible participation. Drawing on Hannah Arendt’s work on civil society (e.g. Arendt, 1998) he defines an ethical space of appearance in today’s mediapolis as one in which an ‘appropriate distance’ between those involved is produced. Thus the ‘encounter’, in and of itself, is not enough to constitute a virtue, incidentally as it were. Here we may find a rejoinder to the claims above that connection, in and of itself, is essentially virtuous, that we cannot not love ‘2.0’ and what it stands for.

The axis of embodiment

Arendt’s work on ethics is closely related to her work on narrative identity – where the narration of a life becomes an act that is performed by another, something that may offer a form of restitution for lives and identities previously denied. The connection between the tale (and even here the focus is on the narrative arrangement rather than the narrative content) and the life is useful in considering our participation as embodied beings in contemporary networked culture.

Processes of fragmentation, multiplication, and dispersal are characteristic of forms of action, practice, and production in contemporary techno-cultural systems: We are encouraged to reach out across networks, but these moments of expansion are balanced by moments of closure, contraction, reconciliation: when dispersed elements – identities, sense streams, life stories and lives, layers of experience, temporalities – are collapsed back together again. The constant demand is that we both fragment – to play the market, to work more, to consume more, to experience sounds in one place, images in another, to expand our sensory intake – and that we reconcile these identities on demand, that we work the centrifugal and centripetal qualities of contemporary networks. We are neither equal players within the space of appearance, nor do we entirely control the rhythms of convergence and fragmentation that organize our embodied engagement and participation in informational culture. Rather, the planes within which we operate (one form of narrative or another, perhaps) are sometimes in tension with one another – something evidenced by moments of somewhat brutal reconciliation, when all the aspects of our multiply engaged selves are brought back together.

Today security systems give us a foretaste of the consummation of this trajectory in everyday life. The increasing stridency of the demand that our database counterparts match up with our (increasingly scanned and sampled) bodies, our credit records with our plane tickets, our passports with our stories, that we stand to attention and get ourselves ‘together’ while presenting ourselves and our documents: all are examples of moments when this kind of coercion is exercised, and offer an indication of the centrality of questions of power and control when investigating the cultural dynamics of these technological systems, which change what I would describe as the order of participation (Bassett, 2007). Kate O’Riordan is exploring this trajectory through a consideration of the intersection of bio-tech, bodies, and information, a constellation she describes as the ‘genome incorporated’ (O’Riordan, 2007).

The axis of the imaginary

The axes proposed above explore participation in relation to agency, identity, sense perception, cultural production, representational economies, and embodiment. Of course there are many other axes along which it might be possible to map the dynamics of participation in post-convergence systems (social networking as a mode of participation, for instance, is gestured towards here, being subsumed into other categories). Here I point to one more, and since it has been central to the article, perhaps it only needs mentioning briefly: The axis of the imaginary might be constituted partly through fictional works (for instance through the science fiction mentioned above), by the many popular takes on post-cyberspace culture, and by web-formed material (blogs etc.), but it also includes the models and maps of future systems and taxonomies of the present such as those being discussed here. ‘2.0’ itself is productively considered not only as an industrial model, but as a cultural imaginary, as is Jenkins’ account of contemporary convergence and Jameson’s take on informational capital.

Part of what is explored in this article is what it means to declare that new maps are necessary. That is, what is at stake in taking a bundle of internet technologies, architectures, products and practices, computing/ICTs, all of which in different ways change the mode of engagement with the system and the forms of interaction with others, with bodies of knowledge, and with information systems themselves, and declaring them distinctively different from earlier web forms (different culturally, technically, ontologically)? Claims for innovation impact on forms of thinking about informational culture, and it is useful to be aware of the ideological consequences of refreshing the promise of computing, and of refreshing forms of critical analysis, both in relation to industrial and cultural discourses, here evidenced by the ‘2.0’ map, and in a more disciplinary way, by Jenkins’ intervention. ‘2.0’, at least in so far as it was taken to usher in the new, certainly refreshes ideas about consumer activity (expressed in terms of participation rather than power/resistance) and therefore gives us a new romance of technology. It re-focuses attention onto participation as an intrinsic part of the system, and defines some of the terms of participation. What it does not do is clarify either the cultural specificities or the power geometries of developing new architectures. Nor, except in incidental ways, does it give us any sense of how to define or understand or develop an ethics or politics of contemporary information.

Conclusion

Above I have explored examples of participation across a series of axes – finding in each case new forms of convergence/divergence. These variously emerge as a negotiation or a struggle over the ownership of what is (jointly) made, over the identity of those who are made in many spaces at once, over representation, over the space of appearance and the construction of distance/proximity in newly cosmopolitanized spaces, over forms of cultural production and engagement, over the ways in which the system itself is imagined.

If there is a characteristic cultural form or mode of engagement to be ‘outed’ in the contemporary informational ecology, one that shows its traces in all the axes I have outlined here, it is beyond fandom. My reading of the contemporary techno-cultural constellation diverges from Henry Jenkins’, whose analysis focuses on participation as integration (‘only blog’). It is also, however, beyond the particular kinds of dis-orientation Jameson described; the bodies that act within new media ecologies are not adequately described by the forms of human fragmentation or ‘schizophrenia’ of which Jameson wrote. Or at least, if these bodies are fragmented, they are also continuously re-conformed, sometimes into larger (hybrid) networks, and their actions may also be said to produce new forms of (narrative) life.

The cultural forms and forms of cultural production discerned across these axes also suggest a different negotiation between (structural) complexity and scale, and proximate significance and form, than that offered in the Jameson model. On the other hand, and here I remain within the Jameson tradition, the cultural stakes of ‘2.0’ are found beyond the model itself – at least in so far as the model is given by the logics not of technology but of (informational) capital. My own sense is of a meta-narrative (fragmentation/convergence) given by techno-cultural capitalism, one in which the forms of reconciliation proffered, understood at a series of different scales and through a series of axes, are ultimately non-reconcilable – despite the constant injunction that we do just that. For this reason, in the age beyond cyberspace, in the age of ‘2.0’ with all its navigational aids, its folksonomies, its apparent flexibility and freedom, we (still) need to develop new cultural maps and new forms of critical mapping.

Author’s Biography

Caroline Bassett researches and teaches technology and cultural form in the Department of Media and Film at the University of Sussex, where she is also Director of the Research Centre for Material Digital Cultures. Her book The Arc and the Machine (MUP, 2007) explores narrative modes of experience in an age of information. She has written widely on gender and digital technology. She is currently completing a book on anti-computing movements.

Notes

[1] Media, Communication and Cultural Studies Association.

[2] Slightly later predictive accounts of the networked informational future found in business journals, in the technical press, and in ‘boosterish’ magazines such as Wired (but also in ‘critical technical’ or hacker journals such as Mondo 2000), evidenced the same trajectory.

[3] Alternative business and technical blueprints for future network development do not understand open systems and the principle of collaboration as inevitable, or even as viable. David Clark’s demand for a new ‘clean slate’ internet, where property rights are to be reinstated and protected, provides a different blueprint for the future, and was criticized in some quarters, not only on the grounds that it might block the efficient exploitation of ICTs but also for its supposedly totalitarian qualities (see I welcome my Internet Overlords, Baard, 2005).

[4] The US-focussed nature of this account, in so far as it pertains to cultural studies, is made clear here, since Jenkins claims that this is a new meld: It may seem far less new beyond the States. The Birmingham tradition, for instance, and those emerging from it have arguably made this connection. However Robin Mansell, in New Media and Society, also argues, from a rather different direction, that political economy has been neglected in accounts of techno-culture (Mansell, 2004).

[5] The pigeon that blogs (da Costa, http://www.beatrizdacosta.net)

[6] The argument there does not focus on the x-morphic forms of agency such creatures might articulate (see Laurier and Philo, 1999), which could also be explored.

[7] The links between forms of ownership of cultural productions and forms of dispersed identity are interesting to note here – not least because the Creative Commons system has nothing at all to say about ‘moral’ rights of ownership, which are based on a recognition of the sense of personal ownership or identification that an author might feel.

References

Appadurai, Arjun. ‘Disjuncture and Difference in the Global Cultural Economy’, Public Culture 2.2 (1990): 1-24.

Arendt, Hannah. The Human Condition (Chicago: University of Chicago Press, 1998).

Bassett, Caroline. The Arc and the Machine (Manchester, MUP, 2007).

____. ‘Forms of Reconciliation: On Contemporary Surveillance’, Cultural Studies 21.1 (2007): 82-94.

____. ‘How many movements?’, in M. Bull and L. Back (eds) The Auditory Cultures Reader (Oxford: Berg, 2003), 343-356.

Baard, Mark. ‘One of the fathers of the internet wants to be a daddy again’, Wired (June 29, 2005).

Bazin, André. What is Cinema? (London: University of California Press, 1967).

Benkler, Yochai. ‘Coase’s Penguin, or, Linux and The Nature of the Firm’, Yale Law Journal 112 (2002-03).

Berry, David. ‘A Contribution to a Political Economy of Open Source and Free Culture’, in F. McMillan (ed.) New Directions in Copyright Law (London: Edward Elgar, 2007), 193-223.

Bleeker, Julian. Why Things Matter: A manifesto for networked objects – cohabiting with pigeons, arphids and Aibos in the Internet of Things, (2005). http://research.techkwondo.com/files/WhyThingsMatter.pdf

Brand, Stuart. The Media Lab: Inventing the Future at MIT (New York: Viking, 1987).

De Sola Poole, Ithiel. Technologies of Freedom (Cambridge, MA: Belknap/Harvard, 1983).

Galloway, Alex. ‘Protocol, or, How Control Exists after Decentralization’, Rethinking Marxism 13.3/4 (2001): 81-88.

Gillmor, Dan. We the Media (Sebastopol CA: O’Reilly Media, 2006).

Hills, Matt. The Pleasures of Horror (London: Continuum, 2005).

Jenkins, Henry. Convergence Culture: Where Old and New Media Collide (New York: NYU Press, 2006).

____. ‘The cultural logic of media convergence’, International Journal of Cultural Studies 7.1 (2004): 33-43.

____. Textual Poachers: Television Fans and Participatory Culture (New York: Routledge, 1992).

Jameson, Fredric. ‘Postmodernism, or, The Cultural Logic of Late Capitalism’, New Left Review 146 (July-August, 1984): 52-92.

____. Postmodernism, or, The Cultural Logic of Late Capitalism (Durham: Duke University Press, 1991).

____. Seeds of Time (New York: Columbia University Press, 1994).

Kennedy, Helen. ‘Beyond anonymity, or future directions for internet research’ New Media & Society 8.6 (2006): 859-876.

SooJung-Kim Pang, Alex. ‘The end of cyberspace and the emerging telecommunications convergence’, paper to Towards a Philosophy of Telecommunications Convergence, Hungarian Academy of Sciences, Budapest, September 21-27 (2007).

Laurier, Eric and Philo, Chris. ‘X-morphising: review essay of Bruno Latour’s Aramis, or the Love of Technology’, Environment and Planning A 31.6 (1999): 1047-1071.

Lévy, Pierre. Collective Intelligence: Mankind’s Emerging World in Cyberspace (New York: Plenum, 1994).

Mansell, Robin. ‘Political economy, power and new media’ in New Media & Society 6.1 (2004): 96-105.

Moravec, Hans. Mind Children (Cambridge, MA: Harvard University Press, 1988).

Negroponte, Nicholas. Being Digital (London: Coronet, 1996).

O’Reilly, Tim. ‘What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software’ (2005).

Rheingold, Howard. Smart Mobs: the next social revolution (London: Basic Books, 2003).

Silverstone, Roger. ‘Convergence Is a Dangerous Word’, The International Journal of Research into New Media Technologies, 1 (1995): 11-13.

Silverstone, Roger. Media and Morality: on the rise of the Mediapolis (Cambridge: Polity, 2007).

Stone, Allucquère Rosanne. The War of Desire and Technology at the Close of the Mechanical Age (Cambridge, MA: MIT Press, 1996).

Stross, Charles. Accelerando (New York: Ace Books, 2005).

Williams, Raymond. Culture and Society (London: Chatto & Windus, 1958).

Yoffie, David. Competing in the age of digital convergence (Boston, MA: Harvard Business School Press, 1997).

When commenting on this article please include the permalink in your blog post or tweet: http://thirteen.fibreculturejournal.org/fcj-088-new-maps-for-old-the-cultural-stakes-of-2-0/