Here’s a double dose of dystopian cheer to accompany a warm and sunny Monday afternoon (or at least that’s the weather here in Central Texas).
First, Adam Kirsch, writing for The New Republic, in a piece dated May 2:
Everyone who ever swore to cling to typewriters, record players, and letters now uses word processors, iPods, and e-mail. There is no room for Bartlebys in the twenty-first century, and if a few still exist they are scorned. (Bartleby himself was scorned, which was the whole point of his preferring not to.) Extend this logic from physical technology to intellectual technology, and it seems almost like common sense to say that if we are not all digital humanists now, we will be in a few years. As the authors of Digital_Humanities write, with perfect confidence in the inexorability — and the desirability — of their goals, “the 8-page essay and the 25-page research paper will have to make room for the game design, the multi-player narrative, the video mash-up, the online exhibit and other new forms and formats as pedagogical exercises.”
. . . The best thing that the humanities could do at this moment, then, is not to embrace the momentum of the digital, the tech tsunami, but to resist it and to critique it. This is not Luddism; it is intellectual responsibility. Is it actually true that reading online is an adequate substitute for reading on paper? If not, perhaps we should not be concentrating on digitizing our books but on preserving and circulating them more effectively. Are images able to do the work of a complex discourse? If not, and reasoning is irreducibly linguistic, then it would be a grave mistake to move writing away from the center of a humanities education.
. . . The posture of skepticism is a wearisome one for the humanities, now perhaps more than ever, when technology is so confident and culture is so self-suspicious. It is no wonder that some humanists are tempted to throw off the traditional burden and infuse the humanities with the material resources and the militant confidence of the digital. The danger is that they will wake up one morning to find that they have sold their birthright for a mess of apps.
Second, Will Self, writing for The Guardian, in a piece also dated May 2:
The literary novel as an art work and a narrative art form central to our culture is indeed dying before our eyes. Let me refine my terms: I do not mean narrative prose fiction tout court is dying — the kidult boywizardsroman and the soft sadomasochistic porn fantasy are clearly in rude good health. And nor do I mean that serious novels will either cease to be written or read. But what is already no longer the case is the situation that obtained when I was a young man. In the early 1980s, and I would argue throughout the second half of the last century, the literary novel was perceived to be the prince of art forms, the cultural capstone and the apogee of creative endeavour. The capability words have when arranged sequentially to both mimic the free flow of human thought and investigate the physical expressions and interactions of thinking subjects; the way they may be shaped into a believable simulacrum of either the commonsensical world, or any number of invented ones; and the capability of the extended prose form itself, which, unlike any other art form, is able to enact self-analysis, to describe other aesthetic modes and even mimic them. All this led to a general acknowledgment: the novel was the true Wagnerian Gesamtkunstwerk.
. . . [T]he advent of digital media is not simply destructive of the codex, but of the Gutenberg mind itself. There is one question alone that you must ask yourself in order to establish whether the serious novel will still retain cultural primacy and centrality in another 20 years. This is the question: if you accept that by then the vast majority of text will be read in digital form on devices linked to the web, do you also believe that those readers will voluntarily choose to disable that connectivity? If your answer to this is no, then the death of the novel is sealed out of your own mouth.
. . . I believe the serious novel will continue to be written and read, but it will be an art form on a par with easel painting or classical music: confined to a defined social and demographic group, requiring a degree of subsidy, a subject for historical scholarship rather than public discourse. . . . I’ve no intention of writing fictions in the form of tweets or text messages — nor do I see my future in computer-games design. My apprenticeship as a novelist has lasted a long time now, and I still cherish hopes of eventually qualifying. Besides, as the possessor of a Gutenberg mind, it is quite impossible for me to foretell what the new dominant narrative art form will be — if, that is, there is to be one at all.
Image: painting by John White Alexander (1856–1915); photograph by Andreas Praefcke (own work), public domain, via Wikimedia Commons
In my recent post about Jeff Kripal’s article “Visions of the Impossible,” I mentioned that biologist and hardcore skeptical materialist Jerry Coyne published a scathing response to Jeff’s argument soon after it appeared. For those who would like to keep up with the conversation, here’s the heart of Coyne’s response (which, in its full version, shows him offering several direct responses to several long passages that he quotes from Jeff’s piece):
For some reason the Chronicle of Higher Education, a weekly publication that details doings (and available jobs) in American academia, has shown a penchant for bashing science and promoting anti-materialist views. . . . I’m not sure why that is, but I suspect it has something to do with supporting the humanities against the dreaded incursion of science — the bogus disease of “scientism.”
That’s certainly the case with a big new article in the Chronicle, “Visions of the impossible: how ‘fantastic’ stories unlock the nature of consciousness,” by Jeffrey J. Kripal, a professor of religious studies at Rice University in Texas. Given his position, it’s not surprising that Kripal’s piece is an argument about Why There is Something Out There Beyond Science. And although the piece is long, I can summarize its thesis in two sentences (these are my words, not Kripal’s):
“People have had weird experiences, like dreaming in great detail about something happening before it actually does; and because these events can’t be explained by science, the most likely explanation is that they are messages from some non-material realm beyond our ken. If you combine that with science’s complete failure to understand consciousness, we must conclude that naturalism is not sufficient to understand the universe, and that our brains are receiving some sort of ‘transhuman signals.'”
That sounds bizarre, especially for a distinguished periodical, but anti-naturalism seems to be replacing postmodernism as the latest way to bash science in academia.
. . . But our brain is not anything like a radio. The information processed in that organ comes not from a transhuman ether replete with other people’s thoughts, but from signals sent from one neuron to another, ultimately deriving from the effect of our physical environment on our senses. If you cut your optic nerves, you go blind; if you cut the auditory nerves, you become deaf. Without such sensory inputs, whose mechanisms we understand well, we simply don’t get information from the spooky channels promoted by Kripal.
When science manages to find reliable evidence for that kind of clairvoyance, I’ll begin to pay attention. Until then, the idea of our brain as a supernatural radio seems like a kind of twentieth-century alchemy—the resort of those whose will to believe outstrips their respect for the facts.
Full article: “Science Is Being Bashed by Academics Who Should Know Better”
(An aside: Is it just me, or does Coyne, in his second paragraph above, effectively insult and dismiss the entire field of religious studies and everyone who works in it?)
Jeff responded five days later in a second piece for the Chronicle, where he met Coyne’s criticisms head-on.
Remember Ray Bradbury’s classic dystopian short story “The Veldt” (excerpted here) with its nightmare vision of a soul-sapping high-technological future where monstrously narcissistic — and, as it turns out, sociopathic and homicidal — children resent even having to tie their own shoes and brush their own teeth, since they’re accustomed to having these things done for them by machines?
Remember Kubrick’s and Clarke’s 2001: A Space Odyssey, where HAL, the super-intelligent AI system that runs the spaceship Discovery, decides to kill the human crew that he has been created to serve, because he has realized/decided that humans are too defective and error-prone to be allowed to jeopardize the mission?
Remember that passage (which I’ve quoted here before) from John David Ebert’s The New Media Invasion in which Ebert identifies the dehumanizing technological trend that’s currently unfolding all around us? Humans, says Ebert, are becoming increasingly superfluous in a culture of technology worship:
Everywhere we look nowadays, we find the same worship of the machine at the expense of the human being, who always comes out of the equation looking like an inconvenient, leftover remainder: instead of librarians to check out your books for you, a machine will do it better; instead of clerks to ring up your groceries for you, a self-checkout will do it better; instead of a real live DJ on the radio, an electronic one will do the job better; instead of a policeman to write you a traffic ticket, a camera (connected to a computer) will do it better. In other words . . . the human being is actually disappearing from his own society, just as the automobile long ago caused him to disappear from the streets of his cities . . . . [O]ur society is increasingly coming to be run and operated by machines instead of people. Machines are making more and more of our decisions for us; soon, they will be making all of them.
Bear all of that in mind, and then read this, which is just the latest in a volley of media reports about the encroaching advent, both rhetorical and factual, of all these things in the real world:
A house that tracks your every movement through your car and automatically heats up before you get home. A toaster that talks to your refrigerator and announces when breakfast is ready through your TV. A toothbrush that tattles on kids by sending a text message to their parents. Exciting or frightening, these connected devices of the futuristic “smart” home may be familiar to fans of science fiction. Now the tech industry is making them a reality.
Mundane physical objects all around us are connecting to networks, communicating with mobile devices and each other to create what’s being called an “Internet of Things,” or IoT. Smart homes are just one segment — cars, clothing, factories and anything else you can imagine will eventually be “smart” as well.
. . . We won’t really know how the technology will change our lives until we get it into the hands of creative developers. “The guys who had been running mobile for 20 years had no idea that some developer was going to take the touchscreen and microphone and some graphical resources and turn a phone into a flute,” [Liat] Ben-Zur [of chipmaker Qualcomm] said.
The same may be true when developers start experimenting with apps for connected home appliances. “Exposing that, how your toothbrush and your water heater and your thermostat . . . are going to interact with you, with your school, that’s what’s next,” said Ben-Zur.
Image courtesy of Victor Habbick / FreeDigitalPhotos.net
It looks like we can forget about “collapse fatigue,” the term — which I just now made up (or maybe not) — for the eventual exhaustion of the doom-and-collapse meme that has been raging its way through our collective public discourse and private psyches for the past decade-plus. I say this based on three recent items that have come to my attention spontaneously, as in, I didn’t go looking for them, but instead found them shoved into my awareness.
ONE: Just a couple of weeks after James Howard Kunstler asked “Are You Crazy to Continue Believing in Collapse?” — and answered, in sum, “No” — we now see that
TWO: a new collapse warning of rather epic proportions and pedigree has begun making its way through the online doom-o-sphere, starting with a piece in The Guardian:
A new study sponsored by Nasa’s Goddard Space Flight Center has highlighted the prospect that global industrial civilisation could collapse in coming decades due to unsustainable resource exploitation and increasingly unequal wealth distribution. Noting that warnings of ‘collapse’ are often seen to be fringe or controversial, the study attempts to make sense of compelling historical data showing that “the process of rise-and-collapse is actually a recurrent cycle found throughout history.” Cases of severe civilisational disruption due to “precipitous collapse – often lasting centuries – have been quite common.”
. . . By investigating the human-nature dynamics of these past cases of collapse, the project identifies the most salient interrelated factors which explain civilisational decline, and which may help determine the risk of collapse today: namely, Population, Climate, Water, Agriculture, and Energy.
These factors can lead to collapse when they converge to generate two crucial social features: “the stretching of resources due to the strain placed on the ecological carrying capacity”; and “the economic stratification of society into Elites [rich] and Masses (or “Commoners”) [poor].” These social phenomena have played “a central role in the character or in the process of the collapse,” in all such cases over “the last five thousand years.”
. . . Modelling a range of different scenarios, Motesharrei and his colleagues conclude that under conditions “closely reflecting the reality of the world today . . . we find that collapse is difficult to avoid.”
The study highlights, in a manner reminiscent of dystopian science fiction, the specific way this division into Elites and Masses not only might play out but has played out in the histories of real societies and civilizations.
It’s lovely to see one of my formative philosophical influences, a man whose dystopian critique of technology is largely unknown to the general public these days — although it has deeply influenced such iconic cultural texts as Koyaanisqatsi — getting some mainstream attention (in The Boston Globe, two years ago):
Imagine for a moment that pretty much everything you think about technology is wrong. That the devices you believed are your friends are in fact your enemies. That they are involved in a vast conspiracy to colonize your mind and steal your soul. That their ultimate aim is to turn you into one of them: a machine.
It’s a staple of science fiction plots, and perhaps the fever dream of anyone who’s struggled too long with a crashing computer. But that nightmare vision is also a serious intellectual proposition, the legacy of a French social theorist who argued that the takeover by machines is actually happening, and that it’s much further along than we think. His name was Jacques Ellul, and a small but devoted group of followers consider him a genius.
To celebrate the centenary of his birth, a group of Ellul scholars will be gathering today at a conference to be held at Wheaton College near Chicago. The conference title: “Prophet in the Technological Wilderness.”
Ellul, who died in 1994, was the author of a series of books on the philosophy of technology, beginning with The Technological Society, published in France in 1954 and in English a decade later. His central argument is that we’re mistaken in thinking of technology as simply a bunch of different machines. In truth, Ellul contended, technology should be seen as a unified entity, an overwhelming force that has already escaped our control. That force is turning the world around us into something cold and mechanical, and — whether we realize it or not — transforming human beings along with it.
In an era of rampant technological enthusiasm, this is not a popular message, which is one reason Ellul isn’t well known. It doesn’t help that he refused to offer ready-made solutions for the problems he identified. His followers will tell you that neither of these things means he wasn’t right; if nothing else, they say, Ellul provides one of the clearest existing analyses of what we’re up against. It’s not his fault it isn’t a pretty picture.
. . . Technology moves forward because we let it, he believed, and we let it because we worship it. “Technology becomes our fate only when we treat it as sacred,” says Darrell J. Fasching, a professor emeritus of religious studies at the University of South Florida. “And we tend to do that a lot.”
. . . “Ellul never opposed all participation in technology,” [says David Gill, founding president of the International Jacques Ellul Society and a professor of ethics at the Gordon-Conwell Theological Seminary]. “He didn’t live in the woods, he lived in a nice house with electric lights. He didn’t drive, but his wife did, and he rode in a car. But he knew how to create limits — he was able to say ‘no’ to technology. So using the Internet isn’t a contradiction. The point is that we have to say that there are limits.”
FULL STORY: “Jacques Ellul, technology doomsayer before his time”
You can take your pick of Cassandras: Michael Crichton, Mary Shelley, whoever made Gattaca. Literature and pop culture never stop obsessing about the bastard spawn of technology and biology, although movies love to have it both ways, wallowing happily in high-tech gadgetry even as they deplore its effects.
Feverish as all this artistic angst is, what’s remarkable is that it barely keeps pace with reality. We are hurtling ever faster toward a point of no return. Consider that, just earlier this year, MIT researchers managed to implant false memories in mice. Or that the now-common procedure of preimplantation genetic diagnosis (PGD) lets would-be parents in fertility treatment test their multiple embryos for defects and discard the embryos they don’t want. One of these days, we may also be able to slow down aging by stopping the degradation of telomeres. (Telomeres are the caps on the ends of chromosomes that keep them from fraying.)
. . . Given how close reality has come to surpassing imagination, what do the Atwoods of the world have to offer? Only what good novelists have always offered: a sense of the tragic, a respect for the power of malevolence, a grasp of how things go awry. In her most recent works, a trilogy in the anti-utopian tradition of Brave New World and 1984 that she began with Oryx and Crake in 2003 and ended this September with MaddAddam, transhumanism meets capitalism. In place of Orwell’s totalitarian state, Atwood gives us an all-powerful genetic-engineering industry. Biotech corporations have superseded governments and turned criminal. Since they are so good at keeping people healthy, they have to come up with new profit centers, so they add viruses to their vitamins.
— Judith Shulevitz, “Margaret Atwood: Our Most Important Prophet of Doom,” The New Republic, September 25, 2013
Also see the September 20 radio interview with Atwood (nearly an hour long, downloadable or streamable) on NPR’s On Point:
Margaret Atwood writes “speculative fiction” — but don’t call it science fiction, she says. It could all happen. And maybe it is. Her latest novel is the culmination of a mind-bending trilogy story of the end of the world that seems all too hideously possible. The world, debauched and wrecked by human over-reach. A designer plague has wiped out almost all of old humanity. Gene-altered pigs and a successor race of leaf-eating humanoids are all over. A new Genesis story is unfolding. For a new world. Up next On Point: novelist Margaret Atwood, and after us.
— “Margaret Atwood Will Make You Afraid of Her Tomorrow,” On Point, NPR, September 20, 2013
In or around June 1995 human character changed again. Or rather, it began to undergo a metamorphosis that is still not complete, but is profound — and troubling, not least because it is hardly noted. When I think about, say, 1995, or whenever the last moment was before most of us were on the internet and had mobile phones, it seems like a hundred years ago.
. . . Previous technologies have expanded communication. But the last round may be contracting it. The eloquence of letters has turned into the unnuanced spareness of texts; the intimacy of phone conversations has turned into the missed signals of mobile phone chat. I think of that lost world, the way we lived before these new networking technologies, as having two poles: solitude and communion. The new chatter puts us somewhere in between, assuaging fears of being alone without risking real connection. It is a shallow between two deep zones, a safe spot between the dangers of contact with ourselves, with others.
I live in the heart of it, and it’s normal to walk through a crowd — on a train, or a group of young people waiting to eat in a restaurant — in which everyone is staring at the tiny screens in their hands. It seems less likely that each of the kids waiting for the table for eight has an urgent matter at hand than that this is the habitual orientation of their consciousness. At times I feel as though I’m in a bad science fiction movie where everyone takes orders from tiny boxes that link them to alien overlords. Which is what corporations are anyway, and mobile phones decoupled from corporations are not exactly common.
. . . A short story that comes back to me over and over again is Kurt Vonnegut’s ‘Harrison Bergeron’, or one small bit of it. Since all men and women aren’t exactly created equal, in this dystopian bit of science fiction a future America makes them equal by force: ballerinas wear weights so they won’t be more graceful than anyone else, and really smart people wear earpieces that produce bursts of noise every few minutes to interrupt their thought processes. They are ‘required by law to wear it at all times. It was tuned to a government transmitter. Every twenty seconds or so, the transmitter would send out some sharp noise to keep people like George from taking unfair advantage of their brains.’ For the smartest person in Vonnegut’s story, the radio transmitter isn’t enough: ‘Instead of a little ear radio for a mental handicap, he wore a tremendous pair of earphones, and spectacles with thick wavy lenses. The spectacles were intended to make him not only half blind, but to give him whanging headaches besides.’
We have all signed up to wear those earpieces, a future form of new media that will chop our consciousnesses into small dice. Google has made real the interruptors that Vonnegut thought of as a fantasy evil for his dystopian 2081.
MORE: “Diary: In the Day of the Postman”