Not the end of the world, says Blair

“The UK is the worst place to grow up in the industrialised world” screamed the headlines, following the publication of UNICEF’s damning report. Coming as it did alongside an almost simultaneous spate of shootings in South London, it left commentators with their own particular spin on events, and state enforcers with a peculiar grasp on reality, each whipped into a frenzy. Not only did the Met go on a ridiculous PR offensive, flooding the streets with the boys in blue (after the fact), but the new blue-eyed ‘boy’ on the block, David Cameron, revived the old blue-rinse chestnut that it was family breakdown that done it.

Meanwhile, the congealing together of unflattering, often unrelated and sometimes plain dodgy stats on what terrible lives our children lead found a welcome ‘we told you so’ from children’s rights lobbyists. They were already convinced that abuse and neglect are rife, and that nasty adults just don’t care. But with all the talk about poor parenting that seemed to unite all sides in the debate, poverty itself didn’t get a look-in – despite the fact that it was the likes of the Peckham of Damilola Taylor infamy (not Kensington or Hampstead) where the shootings actually happened.

And yet for all the left-liberal bleating about the spectre of moral collapse – well, they didn’t say as much because relativists, unlike their right-leaning fellow travellers on this issue, don’t do morals – it took one Tony Blair to put things in perspective. Recent events, he said, in response to an ever-opportunistic Cameron, were ‘not a metaphor for the state of British society’ or for that matter ‘the state of British youth today’.

Of course, this was a bit rich coming from a man who claimed in the Every Child Matters Green Paper that there are children out there whose “lives are filled with risk, fear and danger … from the people closest to them” and that “they are a standing shame to us all”. Indeed, he might have claimed credit for being ahead of the game in turning the tragic and avoidable death of Victoria Climbie at the hands of her carers into a campaign against the hidden horrors of family life. But to his credit he didn’t do that.

Admittedly he may have blown it again since the first draft of this ‘comment’, announcing a change in the law to lower the age threshold for firearms-possession convictions. This was just another familiar episode in the gesture politics for which his government is all too well known. But for that one moment at least, the outgoing prime minister kept his head when everyone else in public life seemed to be losing theirs.

Collapse: How Societies Choose to Fail or Survive

Diamond alludes early on in this book to an image of our own collapse – skyscrapers peeking through the canopies of forest cover like the ‘lost’ Maya temples. Our sorry monuments to conspicuous consumption will topple like the magnificent Easter Island statues unless we learn to live within our means.

He goes on to pile up other false analogies, one upon another, in an effort to demonstrate that we are overstretching natural resources and will inevitably pay the ultimate price. Population explosion, resource depletion, increased susceptibility to climate change, protracted warfare and eventual civil breakdown are all inevitable unless we refrain from testing the limits. The author is at pains to remind us throughout that his is a reasonable thesis and not at all loaded with the assumptions you might expect from his sort: an environmentalist.

But this is nevertheless the work of a modified Malthus – the patron saint of all that is green – in which population outstrips food supply where certain conditions are met. And, for all Diamond’s allusions to evidence and the authority of his scientific training, he can’t help but insist that these conditions are always threatening to unfold. He attempts to show – and it has to be said, almost reasonably – how a combination of trading relationships, hostile neighbours, climate change, the push/pull of human activity vis-à-vis our fragile host, and ultimately how a society responds to crises, determines its chances of ‘surviving’.

His reading of the data, though, is often contradictory and says as much about his own prejudices and the twisted rationales of our peculiarly disenchanted times as about the case studies he is supposedly concerned with. For instance, we are told that the Easter Islanders met their swift end due primarily to their isolation from even their closest South Polynesian neighbours. On the other hand, joining hands across the ocean has its drawbacks too. The Vikings in their infamous plunder of overseas territories were wound up into a destructive frenzy that was only brought to a halt by their defeat at Stamford Bridge and their less than welcome encounters with Native Americans on the other side of the North Atlantic.

As if this wasn’t enough, they had already taken just a few decades to render Iceland uninhabitable, before doing much the same to Greenland. Admittedly, Diamond says this had as much to do with their stubborn refusal to do as the Inuit – with whom they shared the latter island – do. They shunned parkas, kayaks and harpoons and refused to eat fish because, our author speculates, their founding father Erik the Red wasn’t keen. Well, that’s multiculturalism for you. For all their misadventure though, the Nordic Greenlanders were well organised, ensuring the transport of seals from the fjords, caribou from the uplands, and livestock across the settlements to ease hardship where it occurred. But this only made things worse in Diamond’s schema, as the emerging power politics reached a crescendo, tribes fought over the scraps and deforestation devastated the settlements (trees, or rather their absence, feature strongly throughout Collapse).

On the bright side, the Hopi and Zuni Indians of the American Southwest steered a middle course in adopting what the author admiringly refers to as the ‘Pueblo solution’. Their descendants ‘survive’ to this day thanks to their local, low impact, self-sustaining ways and, no doubt, a deeply ingrained resignation to subsistence living encouraged by people like Diamond. They were fortunate enough neither to reside on a hopelessly remote island nor to have had ideas above their station conducive to the establishment of a more complex social formation. Put simply: we can’t live with or without each other.

It is striking how a book about once great civilisations can end up endorsing the notion that their end is what counts because it teaches us a certain humility. An alternative reading might flag up the ingenuity and persistence of those who established early human settlements, sometimes in horrendously inhospitable regions, and celebrate what is arguably testament to the achievements of our common humanity.

But Diamond is writing a book very much in keeping with the restricted imagination of our times, forgetting that few of us – thanks to the development of science, technology and the much derided accumulation of wealth – are subject to the ravages of nature anymore. He describes societies at more or less primitive stages in the development of civilisation as we know it today – which, incidentally, though flawed, and criminally uneven, is a notable improvement on all that preceded it.

The problem does not lie, as Diamond would have it, with the success or failure of past societies to adapt to their environments, but with the cultural straitjacket of our own particular conservatism. And this is the only analogy that passes him by:

Icelanders became conditioned by their long history of experience to conclude that, whatever change they tried to make, it was much more likely to make things worse than better…

I, Robot

Director, Alex Proyas; Starring Will Smith, Bridget Moynahan, Bruce Greenwood; 20th Century Fox, 114 minutes

The ‘I’ in ‘I, Robot’ takes on a whole new significance in Alex Proyas’s disappointing take on Asimov’s classic. It is not so much a ‘Turing Test’, speculating on what might happen if the boffins were able to mimic consciousness. Rather, we get a dumbed-down humanity in which the only thing separating the metal imposters from us is that we can emote.

And that, significantly, is what makes them dangerous. The robots could outsmart us and take over, if we don’t get emotionally intelligent and rein in our rational urges. The message – the inevitability of our downfall as we overstretch ourselves – is no doubt familiar to contemporary audiences, but quite alien to the visionary writing that inspired (for want of a better word) this adaptation.

The cynical cop, played by Will Smith – very much in Men in Black mode – awaits his calling, though wrongly apprehending what he presumes to be a bag-snatching robot in one of the opening, and most engaging, moments in the film. Dr Calvin, rather than the ageing United States Robots figurehead who narrates the book, threatens to become the obligatory love interest for our cynical anti-hero.

What might have been an intriguing psychological thriller – as the trailers of the interrogation of ‘Sonny’ certainly led me to expect – was in fact something altogether more predictable. Just as it looks as if it might get interesting, the pace picks up and we are treated to special effects set-pieces as chases ensue.

Having said that, there are some wonderfully evocative scenes, reminiscent of Blade Runner, as robots and humans pass each other in the Metropolis without a second glance. The vertical car parking and awe-inspiring wired-up cityscape – not to mention the surprisingly nimble lifelike robots – are worth taking away with you if nothing else.

I, Robot

by Isaac Asimov, published by Collins, 1971

In this collection of short stories written in the 1940s, Asimov explores the human condition and our changing understanding of it, vis-à-vis the robot. Each is linked by the reminiscences of Susan Calvin, robo-psychologist with US Robots and Mechanical Men, Inc.

‘Robbie’ is the playmate that causes a mother to worry about her child’s isolation from her peers. ‘Dave’, the asteroid mining multi-robot, is troubled by his personal initiative circuit. ‘Speedy’ is the risk-averse robot collecting selenium on Mercury. ‘Nestor 10’ is uniquely tweaked with only a qualified recognition of the First Law of robotics, that he may not injure a human being. The other robots would instinctively rush in to protect humans exposed to gamma rays, however improbable the potential harm. ‘The Brain’ is only able to create a hyper jump craft because Calvin suppresses the law protecting humanity from its supposed folly.

‘Cutie’ thinks himself a prophet, so unconvinced is he by the notion of his subservience to humankind.

The embrace of risk as a feature of progress is uppermost in what Asimov is doing with these tales. Or perhaps that’s what speaks to the modern reader. The ‘logic’ thing is perhaps one for the sci-fi obsessive. For me, ‘I, Robot’ is a critique of the social pessimism and all-pervading anxiety that holds back potentially beneficial advances. A timeless classic nevertheless, with profound insights for our times.

How We Can Save the Planet

We will live in a ‘carbon-literate’ society, where carbon is a parallel currency and carbon credits tradable on ‘cBay’. We will exist within the confines of carbon budgeting, subjecting ourselves to a regime analogous to our present day penchant for calorie counting with weekly visits to Carbon Watchers. Our internal climates, that is, central heating and air conditioning, will be considered so much ‘thermal monotony’.

There will be no supermarkets as they will be deemed extravagances totting up the food miles of those foolhardy enough to desire the energy-guzzling exotic and convenience foods to which we had formerly been accustomed. Indeed, there will be a return to the larder. And if this all gets too much to bear, there are always the eco-helplines helpfully listed at the back of the book. This is a world in which the winners are domestic tourism and bicycle repair shops. This is the future according to Mayer Hillman.

Under headings such as ‘What should scare you most’ or ‘These figures should shock you’, the author berates us for our energy profligacy. Rising expectations, by his equation, inevitably mean continued climate change. It’s as simple as that. We must divorce resource use from illusory notions of wellbeing without delay, if we are not to succumb to the threat posed by what he describes as the single biggest problem facing humanity. But the fiscal route advocated by many of those sympathetic to his cause lacks the ‘moral basis’ or ‘psychological resonance’, he says, to usher in a new Blitz spirit, and the kind of sacrifices we must inevitably endure. The energy embodied in manufacture, transport and retail must come to be seen as a social ill, rather than a by-product of the relentless motion of the wheels of progress, as might once have been the case. We must narrow our ‘spread-out lifestyles’ if we are to effect the necessary change to avert imminent disaster on a global scale.

Hillman tells the reader that Asia’s consumption has tripled since 1970. The developing world’s one third share of the global shop in 1990 is predicted to rise to two thirds by 2050. And China’s growth, standing at an annual rate on average of 8% since 1980, will result in its economy growing four times over within just two decades. Reason to celebrate, perhaps? Except this will not herald the kind of world that the author deems equitable; at least, it offends against his tenets of thrifty internationalism and intergenerational levelling. The commonly held belief that the generations to follow might expect to be better off than those that preceded them is anathema to all he holds dear. Instead Hillman claims to seek historical redress for a developing world that is both ‘least responsible’ for a world ravaged by climate change, and ‘most vulnerable’ to what this holds for the future.

Surely, you might argue, in a world committed to development, those nations so compromised might be better equipped to cope with, even influence, their fates. But that is to underestimate Hillman’s profound pessimism. He is blind to past gains and dismissive of future claims for ‘human ingenuity’. Fortunately, however, for those more optimistic and with an ear to historical precedent, the case against the author – as the figures he himself presents attest – tells a very different story.

The UK, given its now reluctant status as the first offender on the industrial roll call, has contributed 15% of global emissions since 1750. However, in its regrettably sluggish current period, between 1990 and 2000 the UK economy grew by 26%, with energy demand increasing by just 8%. Government too has recently played its part, with the cumulative impact of strict building regulations ensuring that new housing uses up just 60% of the heating energy typical of the existing stock. And, by 2008, new cars are expected to emit a quarter less carbon dioxide than they did in 1995 thanks to a voluntary agreement between manufacturers and the EU.

Other interventions – distorted by the eco-friendly orthodoxy adopted by officialdom (despite Hillman’s radical pretensions and protestations to the contrary) – have been less welcome. Methane emitted from landfill sites, as the author acknowledges, has been the most fruitful of renewable energy sources to date. This will, nonetheless, be curtailed, consistent with the rationale of EU legislation aimed at reducing organic waste. The decommissioning of nuclear power stations – taken as a given by Hillman, who rejects the notion that they are a good contender for solving the problem – goes unchallenged despite the author recognising that they would otherwise have a future of at least 250 years from known reserves of uranium.

The coincidence of the decline of the UK economy – with the consequent rise of transport and domestic use to the top of the energy charts, meaning that today 51% of energy use is by individuals – and the apocalyptic individualising of the environmental problem is instructive. As Hillman freely admits, what divides him and his detractors is not the science so much as attitudes to risk and uncertainty. He prefers to dismiss both his critics, and the reader reluctant to follow his lifestyle tips, as guilty of ‘repression, suppression, denial, projection and dissociation’. Overconsumption is the problem and obesity the appropriate metaphor, we are told, for our decadent times. Rational debate and contestation don’t get a look-in. This can only delay things.

So far, says Hillman, all we’ve done is ‘muddle through’. If that’s the case, then we’d be well advised to continue doing just that.

Designer Babies: Myth or Reality

Solent People’s Theatre, Portsmouth. The performance and discussion reviewed took place on 13 March 2004.

Following the performance of Brave New World, a panel assembled to discuss developments allegedly foreseen in Huxley’s dystopian tale, specifically pre-implantation genetic diagnosis (PGD) and the ongoing furore over its use or misuse.

Juliet Tizzard, editor of BioNews, director of Progress, and a keen advocate of genetic science, went head-to-head with Josephine Quintavalle of Comment on Reproductive Ethics (CORE), an outspoken critic of the likes of IVF and cloning. For Tizzard, the state ought to extend access to reproductive technologies – including allowing parents to use the technology to have a child who can act as a donor to a sibling with a life-threatening condition. This, instead of ushering in the state-directed cloning of the Hatchery, would promote parental choice.

Quintavalle, not one to understate her case, equated such parents with slave owners. The slave’s child doesn’t exist for its own sake, she said, and nor does a child subject to PGD. Ellie Lee, lecturer in social policy and author of Abortion, Motherhood and Mental Health, countered that the Human Fertilisation and Embryology Authority (HFEA) has chosen to interpret the child’s best interests narrowly, ignoring reference to the interests of the family as a whole.

Caroline Jones, lecturer in law at the University of Southampton, sought greater clarity on the status of embryonic cells, and guidelines on how to regulate disputes if ‘things go wrong’. Yet for Lee, the overriding problem is the increasing preoccupation with parenting, and an eroding of the autonomy of family life. More regulation would only undermine this further.

Whilst there is clear blue water between the positions held by Quintavalle and Tizzard (and by implication, Ellie Lee), most people occupy the agnostic middle ground. This was made clear by a number of contributions from the floor. Perhaps we shouldn’t be rushing ahead. Perhaps we shouldn’t be having the debate at all. Then again, if it were preferable not to have a debilitating condition, surely it would be logically preferable not to bring an affected child into this world. Can we trust the authorities not to go too far?

In an impromptu and perhaps mischievous poll by the chair, Tony Gilland of the Institute of Ideas asked whether we could trust parents themselves. There was a hesitant but majority ‘yes’ vote.

Our Final Century: will the human race survive the twenty-first century?

Martin Rees, Astronomer Royal and former President of the British Association for the Advancement of Science, is (apparently without irony) the latest of science’s discontents to pen a sensationalist collection of alarmist projections.

The recurring themes of risky runaway technology, impending global catastrophe and underlying anti-humanist sentiment suggest a shared cultural template for not so dissenting voices. As with Susan Greenfield’s Tomorrow’s People, Rees engages in reckless scenario-building, moving from one to another with just enough rapidity to give the impression of imminent doom. Surely we should be worried when the Royal Society research professor at Cambridge University joins the director of the Royal Institution in condemning humanity to its inevitably sorry fate? Each is so apparently eminent, after all.

But, like Greenfield, Rees is a crude technological determinist. The internet is charged with creating ‘sharper social segmentation’ and isolation. The communications technologies are inducing panic as apparently evidenced by the anthrax scares in the US, and the UK’s foot and mouth disease epidemic. And, admittedly with some justification this time, Rees argues that new technologies will continue to allow greater ‘leverage’ to those with the darkest of designs, that is, everyone from the Unabomber to Al Qaeda. Yet none of this tells us why acts of nihilistic violence are on the increase, or why society is so atomised and fearful. ‘It is easier to conceive of extra threats than of effective antidotes’ he says. Or explanations.

Rees’s proposed solution to the supposedly escalating threats that society and the planet face is to establish criteria whereby ‘we can rule out catastrophe with a confidence level that reassures us.’ To this end he recommends potentially disastrous experiments be put to public consultation to ensure that any risks entailed fall below what is collectively deemed an ‘unacceptable threshold.’ But the technical exercises advocated by Rees will satisfy no-one, least of all his lay experts. Risk consciousness is all pervasive and not at all susceptible to cost-benefit analysis. The much-heralded democratisation of science, already instituted in the fields of biotech and reproductive technologies, introduces yet another drag factor into these already highly regulated fields of enquiry.

Indeed, such fears, as Rees recognises, tend to limit the scope for scientific endeavour. Economic short-termism and ethical considerations are symptomatic of wider trends, rather than responsible for holding back cutting-edge science, as he claims. The panicky climate in the business world and the rise of the ‘ethics committee’ is testament to our anxious times. The spectre of Monsanto as corporate ogre, and the retreat of an elite feeding off the popular resonance of reactionary lobbyists, means the social and economic potential of GM crops isn’t even debated.

In citing areas of research bereft of ‘compensatory benefit’ outside the lab, Rees suggests a profound decoupling of science and society. The notion of harnessing technical advance to effect social progress is largely absent from this book. Again, as with Tomorrow’s People, people don’t feature much except as grotesques intent on crimes against… Well, humanity (for what that’s worth). Consequently, he finds the notion of anything other than the lightest of humanity’s future footprints unimaginable. From technological to demographic determinism, an urban population explosion in the developing world fills Rees with dread. The global population, he confesses, is likely to fall post-2050 from a peak of 8 billion. Yet, he continues, the planet could sustain up to 10 billion people living in ‘capsule hotels’ Tokyo-style perhaps, on a rice-based subsistence diet.

Rees gets all nostalgic for the 4.5 billion years that preceded us when ‘nothing happened suddenly’. We are the unwelcome ‘unprecedented spasm’ gate-crashing the biodiverse party with our agriculture and incessant radio-noise, hurtling chunks of metal into orbit. He describes the search for alien life, or another ‘blue dot’, as the most significant for science since Darwin. ‘Ever since Copernicus, we have denied ourselves a central location in the universe’, he continues. But in asking, ‘Is there life on Mars?’ Rees evokes Ziggy Stardust’s alienated plea rather than the confident pioneering spirit of the Space Age. The ‘right stuff’ values of that era aren’t so much ‘antiquated’ as at odds with today’s mission-fatigue. Rees’s concern for humanity’s ‘biological or cultural legacy’ reveals an identity-crisis behind Our Final Century, and perhaps explains its broad appeal. Apparently he doesn’t do much star-gazing anymore. In fact, this is a scientist on the couch – if we are alone in the universe, he imagines, such a counter-Copernican revelation would at least ‘boost our cosmic self-esteem’.

As a consequence, Our Final Century, a book about the future, is depressingly unable to imagine a tomorrow that transcends the present in any meaningful way. Rees worries that the relentless accumulation of near negligible risks will eventually result in catastrophic consequences, perhaps within a generation or two. On the strength of the apocalyptic imaginings of two of Britain’s most distinguished scientists, I worry that the ‘cumulative impact’ of their improbable scenarios on rational debate, and already failing nerves, is far more of a threat to our best interests. If, as Rees rightly says, ‘the stakes are high in opening new worlds’, we need to rediscover the confidence that takes society beyond this generation’s survivalist ethic.

From Dystopia to Myopia: Metropolis to Blade Runner

From Metropolis to Blade Runner, representations of the city often suggest a bleak view of the future. Has the image of the city become more dystopian? Does culture provide us with an imaginary future, or does it presage the way that we will influence the real future? The film session at the Future Vision: Future Cities conference, held at the London School of Economics on 6th December 2003, looked at the changing historic visions of the city using cinematic examples from different periods. 

For Kim Newman, a novelist and film critic who spoke at the event, filmic depictions of the future are very much reflective of their times. Typically they fail at the box office but acquire cult status in retrospect. Their very downbeat projections and dark fantasies are strangely seductive. It is, I think, worth noting that films such as Alphaville, Blade Runner and Dark City (discussed below) adopt film noir-like devices to portray shadowy, brutal streets through which their lone anti-heroes prowl. This perhaps reflects a brooding cynicism pervading contemporary thought on all things urban after Fritz Lang’s classic Metropolis.

H G Wells dismissed Metropolis (1927) as a mix of ‘almost every possible foolishness, cliché, platitude, and muddlement about mechanical progress and progress in general’. The creators of the Blade Runner cityscape, on the other hand, openly acknowledged their heavy debt to Lang’s vision. Was Wells right? Xan Brooks, film editor at The Guardian (London) online, speaking at the event, described the film as a modernist representation of an ordered society, exhibiting the sense that there was a relatively uncontested view of where humanity was heading. Despite the theme of industrial conflict, I would add, there was at least a shared framework of meaning. That is lacking today.

The apparent absence of a futuristic vision on celluloid in the post-war period arguably reflected a deep pessimism in the Western cultural elite with regard to ‘progress’. The sci-fi classics of the 50s tended to substitute alien encounter for the ‘red menace’ of the Cold War. In Japan, the cultural impact of Hiroshima and Nagasaki was being exorcised in the incredible guise of Godzilla (1954), described by Stephen Barber (Projected Cities: cinema and urban space, 2002) as a ‘spectacularly mutating form engaged in a direct, irresoluble combat against the surfaces of the city’. Was this just a run of the mill B-movie or was it an early example of the city as polluted landscape? Godzilla, with his ‘radioactive breath’, is the result of American nuclear testing. But the toxic lizard takes on a malignant resonance of its own in its intent to destroy Tokyo.

There seemed to be an optimistic cultural turn in the 1960s but the ambivalent attitude to technology and notions of progress seemed to persist in a modified form. In Alphaville (1965) for instance, Jean-Luc Godard presents a dystopian nightmare world hostile to individuality, love and self-expression. Godard was apparently thinking of calling it Tarzan versus IBM. The film warns of the ‘computerised horrors of the city’. The hero of the piece seeks his own reality by battling against its cold rationality and artificiality. This privileging of the emotions was a significant departure. After all, Wells’ lambasting of the sentimentality of Metropolis would be inadmissible to the advocates of the counterculture. 

Stanley Kubrick’s adaptation of Anthony Burgess’s 1962 novel, A Clockwork Orange (1971), was filmed as the counterculture gave way to punk nihilism. It unapologetically indulges us in the amorality and brutality of urban thuggery. This film seems to represent a turn away from the concerns of its dystopian predecessors with mechanical progress, the toxic city and counter-cultural idealism. How do we account for this abandonment of such grand themes, or the ‘vision thing’? Unlike the earlier films, Kubrick presented not just a bleak depiction of the future, but a near future in which both city and its most marginal inhabitants are utterly degraded. This quintessentially British dystopia of the period (when considered alongside Derek Jarman’s anarchic Jubilee) is worth comparing with the much grander degradation of the screen adaptation of Philip K Dick’s Do Androids Dream of Electric Sheep? (1968).

Aldous Huxley dubbed Los Angeles ‘the City of Dreadful Joy’ and, in one of his post-Brave New World novels, a ‘ruinous sprawling ossuary’ subject to ‘deforestation, pollution and other acts of ecological imbecility’. In Blade Runner (1982), Ridley Scott added cheap neon, digitalized advertising hoardings and teeming streets to bring this particular LA up to date. According to Xan Brooks, the film presented a post-modern collage as opposed to the ordered cityscape of Metropolis. The old and the new coexist, he said. This is certainly, I think, in contrast to Lang’s portrayal of opposing worlds, the elite cityscape against the mechanized workers slaving below.

William Gibson was writing Neuromancer (1984) as Blade Runner opened in cinemas. He claimed not to have seen the film until well into writing his novel. However, each has been credited with initiating the cyberpunk era of science fiction. The introduction of the virtual dystopia to the genre was seemingly grafted onto the themes of urban decay and moral crisis visited in A Clockwork Orange and Blade Runner. It was as if the ‘punk’ had vacated the brutal alleyways of 70s London and the sprawl of LA to stalk cyberspace instead. But how has the dawn of ‘virtual reality’ impacted on the film city of the future?

In Dark City (1998), Alex Proyas presents a stylised metropolis, an ominous and dark dreamscape. Arguably Blade Runner still casts a shadow over these later films. Yet, like Neuromancer before it, Dark City paves the way for The Matrix trilogy insofar as it ‘depicts a world that is illusory and malleable’. For me though, Dark City is a retreat from engagement with the city as a material or social entity. The political industrial dynamic of Metropolis and the gritty urban realism of Blade Runner are shelved. Alphaville may have been anti-rational but it didn’t indulge in the mystical contortions of these films. We may associate the birth of new ageism with the 1960s but only in the 1990s (alongside Lord of the Rings, Harry Potter, et al) was it to really take hold. Why is this?

The renowned academic Russell Jacoby has said: ‘The world stripped of anticipation turns cold and grey’. In contemporary cinema, fantasy is the antidote. From the late 90s on, there has been a marked retreat into the inner world, into childhood and away from dirty, complicated reality. This is a dramatic break from Lang’s clearly framed if simplistic depiction of the workings of a futuristic city. As a moral tale, Metropolis towers above the relativist creations that followed. Fractured, partial conceptions of the future dominate today. Indeed, Barber has noted the ‘wry abuse of, or oblivion directed at, linear narration’ in contemporary explorations of the urban.

But is this solely a cultural phenomenon? I would argue that, on the contrary, it reflects the loss of the cohering influence of the defining political projects of the 20th century. As ideological and institutional foundations have crumbled, so have our social narratives and their cultural expressions. Unlike the lead in Dark City, we have a diminished sense of self that cripples our potential to shape the world around us. The future is thus narrowed in its conception or emptied of meaning. Lang’s work is arguably impressive today because its breadth and mastery run counter to the low horizons we now set ourselves. In virtually every sphere of life, those bold enough to present ambitious visions of the future are met with cynicism. This amounts to a short-sightedness that denies the creative capacity of human agency. If it goes uncorrected, it will inhibit our potential to conceive a future worth realising.

Note: Thanks to Sandy Starr for advice and comments.

Tomorrow’s World

Future Vision: Future Cities, London School of Economics

‘One year ago, Tomorrow’s World was cancelled,’ announced Austin Williams, convenor of the one-day conference Future Vision: Future Cities. Indeed, as the knowing laughs from the audience suggested, even though the reference was to the former BBC flagship of TV science, the implications are wider than that. As the one-day conference on attitudes to risk, urban life and the future came to a close, delegates were left to ponder where we go from here.

Martin Wright, editor in chief, Green Futures, envisioned, well, a green future. The ‘polluting, messy, noisy’ carbon-fuelled age will be replaced by quiet and clean hydrogen-fuelled technologies, he predicted. Wright described a future characterised by greater ‘connectivity’, not just in information flows, but also in the movement of populations and the transmission of diseases. Such a globalised future would mean a greater dependence on the reserves held by politically unstable states. Thus was his vision of the future undermined by a fear of increasing threats to world peace. His demands for clean technology were as much driven by his belief that terrorists would be put off attacking a benign power source as by concern for the future of the planet.

Kevin McCullagh, director of Foresight, Seymour Powell, described Wright’s depictions of a sustainable future as a ‘barrier to innovation’. Despite this, environmentalist projections have come to the fore as the West has lost its vision of the future. It is telling, he said, that we still refer to Kubrick’s 2001: A Space Odyssey, or the experimental spirit of the Sixties to conjure up a more optimistic take on what could be.

Science writer and former senior manager at the SETI Institute, Greg Klerkx was far more optimistic, describing the period from the launch of Sputnik (1957) to the Challenger disaster (1986) as the ‘first space age’. Perhaps routine space flights are still a fiction, but satellite communications are with us; extra-terrestrial mining may be a way off, but we can effectively track the use of earthly resources.

Jeremy Newton, chief executive, National Endowment for Science and Technology in the Arts (NESTA), on the other hand, argued that transport policy has been taken over by the heritage industry. Indeed the only transport innovation we allow ourselves is a ‘machine for reversing time’, the celebration of cutting-edge 19th-century technology in the form of trams and bicycles! This presentation, which challenged the accepted vision of transport, was well received for its wit and whimsy. However, Williams questioned whether simply exposing the folly of a reactionary transport strategy really addresses the societal shift that now views cars as a problem, lauds pedestrianisation and decries mobility. ‘Saying that we are in favour of better modes of transport is of little impact if we are unable to challenge the climate of opinion that says that walking and cycling are more responsible means of mobility.’

For Claire Fox, director, Institute of Ideas, we have a problem not only with the future, but we are also ‘profoundly alienated from the past’. The perceived side-effects of our former self-indulgences are projected into a future where human intervention can only bring unpredictable, and more importantly, undesirable outcomes. In contrast to the futurology exhibited by some taking part in the conference, this state of affairs amounts to paralysis, a stifling ‘presentism’.

But for Fox, the adoption of a futuristic outlook would not in itself change things. We can’t design ourselves out of the problem. Innovators are as likely as their ‘eco-worrier’ contemporaries to internalise the gloomy thinking that characterises our times. Ironically, she said, for all their rhetoric about saving the planet, ‘future generations will be ill-served’ by such anti-human apologists.

Whatever your concern, be it the future of technology, of humanity, or of the planet, it is hard to deny that the ‘vision thing’ is conspicuous by its absence, except in vacuous assertions of the need for ‘blue-sky thinking’. FV: FC was an important opportunity to explore what is at stake not only for the urban-dwellers of tomorrow, but for those of us in the here and now.

Tomorrow’s People: how 21st century technology is changing the way we think and feel

by Susan Greenfield

In Tomorrow’s People, Greenfield, renowned neuroscientist and director of the Royal Institution, indulges her literary ambitions to create a speculative dystopia owing much to Huxley.

In this updated Brave New World, she imagines a near-future when the likes of genetic modification, nanotechnology and cybernetics conspire to leave us in a ‘passive, sensory-laden state’. Our sci-fi imaginings, says Greenfield, tend to present a high-tech world in which we nevertheless remain human, our essential being unchanged. However, the intrusion of these 21st century sciences will alter our lives beyond recognition.

Increasingly, the physical world will itself become an interface of ‘tangible bits’ where we exchange CVs via the electro-conductive sweat of a handshake, and communicate via the e-broidery of our ‘softwear’. Augmented reality (AR) applications will turn us into cyborgs. Chip-embedded spectacles projecting a superimposed image onto the retina will be used to aid engineering design, for example, pinpointing sections for maintenance or manufacture. A little further off, such devices may allow parents to peer into an artificial womb and track their child’s development.

At home, the little ones will play with their ‘smart toys’, which mirror their development as each grapples with its environment; or amuse themselves assembling a kind of sub-atomic nanotech Lego. Meanwhile, their flexi-operative parents will ‘plug-in-and-play’, their serotonin-depleted brains episodically provoked to virtual ‘desk rage’ as performance stats are relayed to the virtual boss. They will socialise ‘remotely’, or be promiscuously lost in virtual sex role-play with a designer partner of their choice. All the while, the Hyperhouse, with its ‘electronic spine’, will teem with smart appliances, activated by bodily sensors adjusting ambience and functionality accordingly.

Beyond the not-so-private sphere, populations will diverge further as the uneven application of these technologies leads to ‘speciation’. In Greenfield’s most optimistic scenario, there will be no international development as such, but a wiring up of cottage industries, equipping ‘every village with an electronic library’! The spectre of apocalyptic bio- and cyber-terrorism will loom. We will wear air quality monitoring devices, keep the hi-tech equivalent of a gas mask in the bathroom cabinet, and our offices and homes will be equipped with sophisticated air-filtration systems.

It may already be apparent that Tomorrow’s People is ambivalent about the future. It is also profoundly anti-human in outlook. Post 9/11, Greenfield finds it ‘harder to regard the human element as a constant force for good’. But her pessimism seems more deep-rooted. For her, the self is a fragile expressive entity, and ‘the firewall of our sense of individuality’ is in increasing danger of being breached – by the collective, i.e. other people! In the home of the future, you’ll need a ‘real room’ retreat from the interactive noise, but will equally find the offline experience exposing and disorientating. The desire for real-time stimulation will draw us to the sporting arenas and their ‘seething mass of sweaty humanity’, a frightening and distasteful prospect for Greenfield.

Greenfield’s view that ‘human nature’ has changed little since our ancestors got off all fours bears little scrutiny. If this were the case, these new technologies might indeed be unnerving. However, the shaping of every tool since the carved animal bone has also helped to shape our minds, our environments and our social organisation. Only deaf separatists would claim that cochlear implants erode the identity of their beneficiaries – but surely this example of early cybernetics is just an extension of historical precedent.

Like any good dystopian, Greenfield captures something of our lives today and projects it into the future. We are certainly living increasingly individuated lives, alienated and fearful of each other, but technology is not making us this way. Already, she notes, some of us float in and out of a virtual world, with our hands-free mobiles, oblivious to those around us. But text messaging and virtual dating, for example, are popular because of their distancing qualities, the antithesis of what communication technologies are ostensibly for. Instead of being understood as a means of mastering our environments, technical advance can take on a threatening mystical quality, and end up mediating our anxieties.

Greenfield is anxious that ‘text-based unambiguous knowledge’ will give way to associative hypertext. But this would be a consequence of the relativisation of knowledge, a cultural phenomenon, not a technological one. Similarly, Greenfield wonders whether ‘science has made us less accountable for our actions’. Simply put, no it hasn’t. This very sense of humans being deeply vulnerable, with little agency (a sense to which she seems to subscribe), is doing this all on its own. Consequently, the erosion of the private sphere has been underway for some time, with the intrusion of the state, not IT, being the primary driver.