Blog Archives

The digital murder of the Gutenberg mind


Here’s a double dose of dystopian cheer to accompany a warm and sunny Monday afternoon (or at least that’s the weather here in Central Texas).

First, Adam Kirsch, writing for The New Republic, in a piece dated May 2:

Everyone who ever swore to cling to typewriters, record players, and letters now uses word processors, iPods, and e-mail. There is no room for Bartlebys in the twenty-first century, and if a few still exist they are scorned. (Bartleby himself was scorned, which was the whole point of his preferring not to.) Extend this logic from physical technology to intellectual technology, and it seems almost like common sense to say that if we are not all digital humanists now, we will be in a few years. As the authors of Digital_Humanities write, with perfect confidence in the inexorability — and the desirability — of their goals, “the 8-page essay and the 25-page research paper will have to make room for the game design, the multi-player narrative, the video mash-up, the online exhibit and other new forms and formats as pedagogical exercises.”

. . . The best thing that the humanities could do at this moment, then, is not to embrace the momentum of the digital, the tech tsunami, but to resist it and to critique it. This is not Luddism; it is intellectual responsibility. Is it actually true that reading online is an adequate substitute for reading on paper? If not, perhaps we should not be concentrating on digitizing our books but on preserving and circulating them more effectively. Are images able to do the work of a complex discourse? If not, and reasoning is irreducibly linguistic, then it would be a grave mistake to move writing away from the center of a humanities education.

. . . The posture of skepticism is a wearisome one for the humanities, now perhaps more than ever, when technology is so confident and culture is so self-suspicious. It is no wonder that some humanists are tempted to throw off the traditional burden and infuse the humanities with the material resources and the militant confidence of the digital. The danger is that they will wake up one morning to find that they have sold their birthright for a mess of apps.

MORE: “The False Promise of the Digital Humanities”

Second, Will Self, writing for The Guardian, in a piece also dated May 2:

The literary novel as an art work and a narrative art form central to our culture is indeed dying before our eyes. Let me refine my terms: I do not mean narrative prose fiction tout court is dying — the kidult boywizardsroman and the soft sadomasochistic porn fantasy are clearly in rude good health. And nor do I mean that serious novels will either cease to be written or read. But what is already no longer the case is the situation that obtained when I was a young man. In the early 1980s, and I would argue throughout the second half of the last century, the literary novel was perceived to be the prince of art forms, the cultural capstone and the apogee of creative endeavour. The capability words have when arranged sequentially to both mimic the free flow of human thought and investigate the physical expressions and interactions of thinking subjects; the way they may be shaped into a believable simulacrum of either the commonsensical world, or any number of invented ones; and the capability of the extended prose form itself, which, unlike any other art form, is able to enact self-analysis, to describe other aesthetic modes and even mimic them. All this led to a general acknowledgment: the novel was the true Wagnerian Gesamtkunstwerk.

. . . [T]he advent of digital media is not simply destructive of the codex, but of the Gutenberg mind itself. There is one question alone that you must ask yourself in order to establish whether the serious novel will still retain cultural primacy and centrality in another 20 years. This is the question: if you accept that by then the vast majority of text will be read in digital form on devices linked to the web, do you also believe that those readers will voluntarily choose to disable that connectivity? If your answer to this is no, then the death of the novel is sealed out of your own mouth.

. . . I believe the serious novel will continue to be written and read, but it will be an art form on a par with easel painting or classical music: confined to a defined social and demographic group, requiring a degree of subsidy, a subject for historical scholarship rather than public discourse. . . . I’ve no intention of writing fictions in the form of tweets or text messages — nor do I see my future in computer-games design. My apprenticeship as a novelist has lasted a long time now, and I still cherish hopes of eventually qualifying. Besides, as the possessor of a Gutenberg mind, it is quite impossible for me to foretell what the new dominant narrative art form will be — if, that is, there is to be one at all.

MORE: “The Novel Is Dead (This Time It’s for Real)”


Image: Painting by John White Alexander (1856–1915); photo by Andreas Praefcke (own photograph) [Public domain], via Wikimedia Commons

Superfluous humans in a world of smart machines


Remember Ray Bradbury’s classic dystopian short story “The Veldt” (excerpted here) with its nightmare vision of a soul-sapping high-technological future where monstrously narcissistic — and, as it turns out, sociopathic and homicidal — children resent even having to tie their own shoes and brush their own teeth, since they’re accustomed to having these things done for them by machines?

Remember Kubrick’s and Clarke’s 2001: A Space Odyssey, where HAL, the super-intelligent AI system that runs the spaceship Discovery, decides to kill the human crew that he has been created to serve, because he has realized/decided that humans are too defective and error-prone to be allowed to jeopardize the mission?

Remember that passage (which I’ve quoted here before) from John David Ebert’s The New Media Invasion in which Ebert identifies the dehumanizing technological trend that’s currently unfolding all around us? Humans, says Ebert, are becoming increasingly superfluous in a culture of technology worship:

Everywhere we look nowadays, we find the same worship of the machine at the expense of the human being, who always comes out of the equation looking like an inconvenient, leftover remainder: instead of librarians to check out your books for you, a machine will do it better; instead of clerks to ring up your groceries for you, a self-checkout will do it better; instead of a real live DJ on the radio, an electronic one will do the job better; instead of a policeman to write you a traffic ticket, a camera (connected to a computer) will do it better. In other words . . . the human being is actually disappearing from his own society, just as the automobile long ago caused him to disappear from the streets of his cities . . . . [O]ur society is increasingly coming to be run and operated by machines instead of people. Machines are making more and more of our decisions for us; soon, they will be making all of them.

Bear all of that in mind, and then read this, which is just the latest in a volley of media reports about the encroaching advent, both rhetorical and factual, of all these things in the real world:

A house that tracks your every movement through your car and automatically heats up before you get home. A toaster that talks to your refrigerator and announces when breakfast is ready through your TV. A toothbrush that tattles on kids by sending a text message to their parents. Exciting or frightening, these connected devices of the futuristic “smart” home may be familiar to fans of science fiction. Now the tech industry is making them a reality.

Mundane physical objects all around us are connecting to networks, communicating with mobile devices and each other to create what’s being called an “Internet of Things,” or IoT. Smart homes are just one segment — cars, clothing, factories and anything else you can imagine will eventually be “smart” as well.

. . . We won’t really know how the technology will change our lives until we get it into the hands of creative developers. “The guys who had been running mobile for 20 years had no idea that some developer was going to take the touchscreen and microphone and some graphical resources and turn a phone into a flute,” [Liat] Ben-Zur [of chipmaker Qualcomm] said.

The same may be true when developers start experimenting with apps for connected home appliances. “Exposing that, how your toothbrush and your water heater and your thermostat . . . are going to interact with you, with your school, that’s what’s next,” said Ben-Zur.

MORE: “The Internet of Things: Helping Smart Devices Talk to Each Other”

Image courtesy of Victor Habbick

Jacques Ellul’s nightmare vision of a technological dystopia


It’s lovely to see one of my formative philosophical influences, and a man whose dystopian critique of technology is largely unknown to the populace at large these days — although it has deeply influenced such iconic cultural texts as Koyaanisqatsi — getting some mainstream attention (in The Boston Globe, two years ago):

Imagine for a moment that pretty much everything you think about technology is wrong. That the devices you believed are your friends are in fact your enemies. That they are involved in a vast conspiracy to colonize your mind and steal your soul. That their ultimate aim is to turn you into one of them: a machine.

It’s a staple of science fiction plots, and perhaps the fever dream of anyone who’s struggled too long with a crashing computer. But that nightmare vision is also a serious intellectual proposition, the legacy of a French social theorist who argued that the takeover by machines is actually happening, and that it’s much further along than we think. His name was Jacques Ellul, and a small but devoted group of followers consider him a genius.

To celebrate the centenary of his birth, a group of Ellul scholars will be gathering today at a conference to be held at Wheaton College near Chicago. The conference title: “Prophet in the Technological Wilderness.”

Ellul, who died in 1994, was the author of a series of books on the philosophy of technology, beginning with The Technological Society, published in France in 1954 and in English a decade later. His central argument is that we’re mistaken in thinking of technology as simply a bunch of different machines. In truth, Ellul contended, technology should be seen as a unified entity, an overwhelming force that has already escaped our control. That force is turning the world around us into something cold and mechanical, and — whether we realize it or not — transforming human beings along with it.

In an era of rampant technological enthusiasm, this is not a popular message, which is one reason Ellul isn’t well known. It doesn’t help that he refused to offer ready-made solutions for the problems he identified. His followers will tell you that neither of these things mean he wasn’t right; if nothing else, they say, Ellul provides one of the clearest existing analyses of what we’re up against. It’s not his fault it isn’t a pretty picture.

. . . Technology moves forward because we let it, he believed, and we let it because we worship it. “Technology becomes our fate only when we treat it as sacred,” says Darrell J. Fasching, a professor emeritus of religious studies at the University of South Florida. “And we tend to do that a lot.”

. . . “Ellul never opposed all participation in technology,” [says David Gill, founding president of the International Jacques Ellul Society and a professor of ethics at the Gordon-Conwell Theological Seminary]. “He didn’t live in the woods, he lived in a nice house with electric lights. He didn’t drive, but his wife did, and he rode in a car. But he knew how to create limits — he was able to say ‘no’ to technology. So using the Internet isn’t a contradiction. The point is that we have to say that there are limits.”

FULL STORY: “Jacques Ellul, technology doomsayer before his time”

Recommended Reading 42

THIS WEEK: A report on the riots in Sweden and what they may portend for affluent liberal-democratic nations that have thought themselves insulated from such crises. Thoughts on how the Internet is using us all. The crumbling facade of mainstream authority and received wisdom in public health pronouncements, along with internal strife in the medical community over how — or whether — to try to explain uncertainty and nuance in medical science to the general public. The survival, and in fact centrality, of philosophy in general and metaphysics in particular amid our age of scientific confusion. An interview with science fiction legend and culture war lightning rod Orson Scott Card on politics, art, and writing. A review of a new book about “deciding to accept the limitations of small-town life in exchange for the privilege of being part of a community,” which may offer “a plausible way out of the postmodern alienation and ironic posturing” that characterizes contemporary views of the good life. A three-minute animation of the main points from Nicholas Carr’s The Shallows: What the Internet Is Doing to Our Brains. Read the rest of this entry

‘Koyaanisqatsi’: A warning not just for America but for China


I first watched the film Koyaanisqatsi as an undergraduate student at Mizzou, in the company of other students, in the context of a student Philosophy Club meeting. And the film flat-out blew my mind and rocked my world. I have no idea if any of the others present at that viewing were as deeply affected as I was, but today, just over two decades later, the film, and also its almost literally divine Philip Glass musical score, remains a touchstone philosophical-cinematic text that continues to act with a transformative tug upon my psyche.

A good deal of the enduring (obsessive) focus here at The Teeming Brain on the dystopian underside and apocalyptic overtones of life here in the postindustrial wonderland of the great American technopoly stems from two sources. One of these is the collective totality of a mini-library of books and films, both fiction and nonfiction, that have powerfully impacted me with their explorations of this heady convergence point of subversive and destabilizing spiritual, psychological, artistic, political, societal, economic, and technological reality. The other is Koyaanisqatsi, standing independently on its own rarefied plane of import. Not coincidentally, several of those books have been cited as direct inspirations by Godfrey Reggio, Koyaanisqatsi’s director and mastermind.

If you’re unfamiliar with the film, or if perhaps you’re not aware of the fact that you may already be familiar with parts of it — as with (to name just one prominent example) the wonderful use of two pieces of its music during the Dr. Manhattan origin sequence in the Watchmen film a few years ago — here’s Wikipedia’s synopsis, which is excellent:

Koyaanisqatsi, also known as Koyaanisqatsi: Life Out of Balance, is a 1982 film directed by Godfrey Reggio with music composed by Philip Glass and cinematography by Ron Fricke. The film consists primarily of slow motion and time-lapse footage of cities and many natural landscapes across the United States. The visual tone poem contains neither dialogue nor a vocalized narration: its tone is set by the juxtaposition of images and music. Reggio explains the lack of dialogue by stating “it’s not for lack of love of the language that these films have no words. It’s because, from my point of view, our language is in a state of vast humiliation. It no longer describes the world in which we live.” In the Hopi language, the word Koyaanisqatsi means “unbalanced life”. The film is the first in the Qatsi trilogy of films: it is followed by Powaqqatsi (1988) and Naqoyqatsi (2002). The trilogy depicts different aspects of the relationship between humans, nature, and technology. Koyaanisqatsi is the best known of the trilogy and is considered a cult film.

You can also watch the trailer. I mean it seriously. Stop reading and watch this now:

On May 15 The Chronicle of Higher Education published a brief and fascinating essay that brought this all back to mind. In “‘Koyaanisqatsi’ in China,” Jonathan Levine, a freelance journalist and a lecturer in American studies and English at Beijing’s Tsinghua University, explains how a student approached him during his first semester there to ask “if we could watch a movie — something about ‘American culture.'” Levine points out that this request automatically raised an important and difficult question: “If you were given the opportunity of showing some of China’s future leaders one movie that encapsulated the American essence, what would it be?”

He ended up showing them Koyaanisqatsi — “probably not the first movie you would think of,” he quite rightly points out. (“Probably not even in the first 100,” he quite rightly adds.) But the choice was a savvy one. “With no spoken dialogue,” he writes, “Koyaanisqatsi is a difficult film but a universal one, free of the barriers of context and language that inevitably divide native and non-native English speakers. Accompanied by Philip Glass’s powerful, minimalist score, the scenes take viewers on a sensory roller coaster, rollicking through a slide show of human achievement and folly. The film is a tabula rasa, from which viewers can draw their own conclusions.”

Levine’s reflections on the experience for both him and his students indicate that it was an excellent choice for exploring the depths of the film and its meaning for both America and now China, which has been racing for decades to emulate America’s model of material success. He writes, “Though the film was shot entirely in the United States, by an American director, the similarities to modern China are so striking as to be inescapable. The Brutalist architecture of the condemned Pruitt-Igoe housing project, in St. Louis, could have been airlifted from the outskirts of Beijing. The throngs bustling to and fro — the inhabitants of one of China’s manifold concrete jungles. Income inequality, pollution, degradation of public infrastructure, check, check, and check.”

His closing paragraphs draw out the meaning of the film not only for his Chinese audience but for me personally, and in a shockingly direct way that echoes exactly what I have said to myself, minus the specific references to China, as I have lived with this film for the past 20 years:

Rather than being dated, the haunting imagery of Koyaanisqatsi has become more valuable with time. It now demonstrably encapsulates both the United States and China. As you may have already guessed, my aim in showing the movie was not a dry exploration of American culture, but to raise fundamental questions among China’s brightest minds about the direction of their own country. It is not a warning, but more a checkpoint. The Chinese word for America is “Meiguo,” which literally means “beautiful country.”

My goal with Koyaanisqatsi was not to smash this myth, but to remind those who watch the film that America’s road to development and prosperity was not without speed bumps. It was and is riddled with points of tensions, contradictions, and — in short — many things that are not so beautiful. I hope that the movie will not just provide a snapshot of the United States but will cause my students to question their own nation’s model of development. Should China’s highest aspiration be merely a Sinified simulacrum of all things Western? China has embraced the Western paradigm of development, but is there perhaps another way?

In the words of Mark, “What shall it profit a man, if he shall gain the whole world, and lose his own soul?”

To drive home the point, here’s what may be the film’s most haunting passage:

If you haven’t seen Koyaanisqatsi, please consider my heartfelt recommendation that you remedy that lack as soon as possible, because you’re missing out on a work of art that stands as a kind of cinematic Rosetta Stone for decoding and understanding the arc and tenor of the times we live in.

Frankenstein wept: Algorithms unleashed, Matrix rising

The iconic camera eye of HAL in 2001

Here’s British author and journalist Steven Poole, writing for Aeon magazine in an article published just today and titled “Slaves to the Algorithm”:

Our age elevates the precision-tooled power of the algorithm over flawed human judgment. From web search to marketing and stock-trading, and even education and policing, the power of computers that crunch data according to complex sets of if-then rules is promised to make our lives better in every way. Automated retailers will tell you which book you want to read next; dating websites will compute your perfect life-partner; self-driving cars will reduce accidents; crime will be predicted and prevented algorithmically. If only we minimise the input of messy human minds, we can all have better decisions made for us. So runs the hard sell of our current algorithm fetish.

. . . If you are feeling gloomy about the automation of higher education, the death of newspapers, and global warming, you might want to talk to someone — and there’s an algorithm for that, too. A new wave of smartphone apps with eccentric titular orthography (iStress, myinstantCOACH, MoodKit, BreakkUp) promise a psychotherapist in your pocket. Thus far they are not very intelligent, and require the user to do most of the work — though this second drawback could be said of many human counsellors too. Such apps hark back to one of the legendary milestones of ‘artificial intelligence’, the 1960s computer program called ELIZA. That system featured a mode in which it emulated Rogerian psychotherapy, responding to the user’s typed conversation with requests for amplification (‘Why do you say that?’) and picking up — with its ‘natural-language processing’ skills — on certain key words from the input. Rudimentary as it is, ELIZA can still seem spookily human. Its modern smartphone successors might be diverting, but this field presents an interesting challenge in the sense that, the more sophisticated it gets, the more potential for harm there will be. One day, the makers of an algorithm-driven psychotherapy app could be sued by the survivors of someone to whom it gave the worst possible advice.

What lies behind our current rush to automate everything we can imagine? Perhaps it is an idea that has leaked out into the general culture from cognitive science and psychology over the past half-century — that our brains are imperfect computers. If so, surely replacing them with actual computers can have nothing but benefits. Yet even in fields where the algorithm’s job is a relatively pure exercise in number-crunching, things can go alarmingly wrong.

Here’s author and cultural critic John David Ebert, writing in The New Media Invasion: Digital Technologies and the World They Unmake (2011):

Everywhere we look nowadays, we find the same worship of the machine at the expense of the human being, who always comes out of the equation looking like an inconvenient, leftover remainder: instead of librarians to check out your books for you, a machine will do it better; instead of clerks to ring up your groceries for you, a self-checkout will do it better; instead of a real live DJ on the radio, an electronic one will do the job better; instead of a policeman to write you a traffic ticket, a camera (connected to a computer) will do it better. In other words . . . the human being is actually disappearing from his own society, just as the automobile long ago caused him to disappear from the streets of his cities . . . . [O]ur society is increasingly coming to be run and operated by machines instead of people. Machines are making more and more of our decisions for us; soon, they will be making all of them.


Here’s science fiction legend Brian Aldiss, writing in the first chapter of his seminal 1973 study Billion Year Spree: The True History of Science Fiction, titled “The Origin of the Species: Mary Shelley“:

For a thousand people familiar with the story of Victor creating his monster from selected cadaver spares and endowing them with new life, only to shrink back in horror from his own creation, not one will have read Mary Shelley’s original novel. This suggests something of the power of infiltration of this first great myth of the industrial age. [emphasis added]

Here’s literature scholar Christopher Small, writing in his (likewise influential) 1972 book Mary Shelley’s Frankenstein: Tracing the Myth:

The Monster is not a ghost. He is not a genie or a spirit summoned by magic from the deep; at the same time he issues, like these, from the imagination. He is manifestly a product, or aspect, of his maker’s psyche: he is a psychic phenomenon given objective, or ‘actual’ existence. A Doppelganger of ‘real flesh and blood’ is not unknown, of course, in other fictions, nor is the idea of a man created ‘by other means than Nature has hitherto provided’, the creation of Prometheus being the archetype. But Frankenstein is ‘the modern Prometheus’: the profound effect achieved by Mary lay in showing the Monster as the product of modern science; made, not by enchantment, i.e., directly by the unconscious, an ‘imaginary’ being, but through a process of scientific discovery, i.e., the imagination objectified.

Here’s the late, great cultural critic/historian and philosopher Theodore Roszak, writing in his fairly legendary 1973 book Where the Wasteland Ends: Politics and Transcendence in Postindustrial Society:

Long before the demonic possibilities of science had become clear for all to see, it was a Romantic novelist who foresaw the career of Dr. Frankenstein — and so gave us the richest (and darkest) literary myth the culture of science has produced.

Here’s Agent Smith, the artificial intelligence program in charge of keeping order within the simulated human reality of The Matrix (1999), speaking to the captured Morpheus, leader of the resistance movement against the machine civilization that has enslaved humans:

As soon as we started thinking for you it really became our civilization, which is of course what this is all about. Evolution, Morpheus, evolution. Like the dinosaur. Look out that window. You’ve had your time. The future is our world, Morpheus. The future is our time.

Here’s Victor Frankenstein in the 1831 edition of Frankenstein, or The Modern Prometheus, lying on his deathbed and lamenting his former obsessive quest to create and “perfect” life, which led not only to his own utter wretchedness and destruction but to that of everybody he loved:

My limbs now tremble, and my eyes swim with the remembrance; but then a resistless and almost frantic impulse urged me forward; I seemed to have lost all soul or sensation but for this one pursuit.

. . . Do you share my madness? Have you drank also of the intoxicating draught? Hear me — let me reveal my tale, and you will dash the cup from your lips!

If one is looking for a guiding thread of supervening meaning or moral insight here, I might be inclined to borrow and recontextualize the words of legendary and visionary music producer Sandy Pearlman — from the liner notes to Blue Öyster Cult’s epic 1988 concept album Imaginos, about the centuries-long efforts of a transcendent pantheon of “Invisibles” to intercede in human history and guide it to a preordained conclusion — by suggesting that this whole situation portends, indicates, and represents “a disease with a long incubation.”


Images: “HAL9000” from 2001: A Space Odyssey by Cryteria (Own work) [CC-BY-3.0], via Wikimedia Commons. “Frontispiece to Frankenstein 1831” by Theodore Von Holst (1810–1844) (Tate Britain. Private collection, Bath.) [Public domain], via Wikimedia Commons.

Recommended Reading 40

In this installment: A report on the new type of futurism that’s being spearheaded by highly regarded scientists and scholars for the purpose of studying the reality and scope of existential threats to human survival. The triumph of fear as a central motivating reality in contemporary geopolitics. The global plague of feral pigs. Renowned author George Saunders on what the Internet is doing to his brain. How writers pursue their passions for other activities as a means of inflaming and enriching their creative authorial inspiration. Why the real-world “bestiary” of extraordinary life forms on earth rivals or exceeds the wildest imaginings of fantasists. The Gothic as a “sublime contagion” compelling us to explore boundaries and transgression. Read the rest of this entry

Downgrading humans in the age of robots

From a recent essay by University of Toronto philosophy professor Mark Kingwell, writing for The Chronicle of Higher Education about “the dream-logic of all technology, namely that it should make our lives easier and more fun,” and the dark side of the age-old science fictional — and now increasingly science factual — vision of creating a “robot working class” that will free humans from unwanted labor:

We are no longer owners and workers, in short; we are, instead, voracious and mostly quite happy producers and consumers of images. Nowadays, the images are mostly of ourselves, circulated in an apparently endless frenzy of narcissistic exhibitionism and equally narcissistic voyeurism: my looking at your online images and personal details, consuming them, is somehow still about me. [Guy] Debord was prescient [in his 1967 book The Society of the Spectacle] about the role that technology would play in this general social movement. “Just when the mass of commodities slides toward puerility, the puerile itself becomes a special commodity; this is epitomized by the gadget. . . . Reified man advertises the proof of his intimacy with the commodity. The fetishism of commodities reaches moments of fervent exaltation similar to the ecstasies of the convulsions and miracles of the old religious fetishism. The only use which remains here is the fundamental use of submission.”

It strikes me that this passage, with the possible exception of the last sentence, could have been plausibly recited by Steve Jobs at an Apple product unveiling. For Debord, the gadget, like the commodity more generally, is not a thing; it is a relation. As with all the technologies associated with the spectacle, it closes down human possibility under the guise of expanding it; it makes us less able to form real connections, to go off the grid of produced and consumed leisure time, and to find the drifting, endlessly recombining idler that might still lie within us. There is no salvation from the baseline responsibility of being here in the first place to be found in machines.

. . . To use a good example of critical consciousness emerging from within the production cycles of the culture industry, consider the Axiom, the passenger spaceship that figures in the 2008 animated film WALL-E. Here, robot labor has proved so successful, and so nonthreatening, that the human masters have been freed to indulge in nonstop indulgence of their desires. As a result, they have over generations grown morbidly obese, addicted to soft drinks and video games, their bones liquefied in the ship’s microgravity conditions. They exist, but they cannot be said to live.

The gravest danger of offloading work is not a robot uprising but a human downgrading. Work hones skills, challenges cognition, and, at its best, serves noble ends. It also makes the experience of genuine idling, in contrast to frenzied leisure time, even more valuable. Here, with only our own ends and desires to contemplate — what shall we do with this free time? — we come face to face with life’s ultimate question. To ask what is worth doing when nobody is telling us what to do, to wonder about how to spend our time, is to ask why are we here in the first place.

— Mark Kingwell, “The Barbed Gift of Leisure,” The Chronicle of Higher Education, March 25, 2013

Also see John David Ebert in 2011’s The New Media Invasion: Digital Technologies and the World They Unmake, from the chapter titled “Robots, Drones and the Disappearance of the Human Being”:

[The rise of drones in warfare] is a syndrome of thought that is currently infecting American society as a whole and is eating away at it like a cancer. Everywhere we look nowadays, we find the same worship of the machine at the expense of the human being, who always comes out of the equation looking like an inconvenient, leftover remainder: instead of librarians to check out your books for you, a machine will do it better; instead of clerks to ring up your groceries for you, a self-checkout will do it better; instead of a real live DJ on the radio, an electronic one will do the job better; instead of a policeman to write you a traffic ticket, a camera (connected to a computer) will do it better. In other words. . . the human being is actually disappearing from his own society, just as the automobile long ago caused him to disappear from the streets of his cities. . . . [O]ur society is increasingly coming to be run and operated by machines instead of people. Machines are making more and more of our decisions for us; soon, they will be making all of them.

Our technological future according to Idiocracy:

Recommended Reading 38

Mexican Cartels Dispatch Trusted Agents to Live Deep Inside United States
The Washington Post (Associated Press), April 1, 2013

Mexican drug cartels whose operatives once rarely ventured beyond the U.S. border are dispatching some of their most trusted agents to live and work deep inside the United States — an emboldened presence that experts believe is meant to tighten their grip on the world’s most lucrative narcotics market and maximize profits. . . . [A] wide-ranging Associated Press review of federal court cases and government drug-enforcement data, plus interviews with many top law enforcement officials, indicates the groups have begun deploying agents from their inner circles to the U.S. Cartel operatives are suspected of running drug-distribution networks in at least nine non-border states, often in middle-class suburbs in the Midwest, South and Northeast. “It’s probably the most serious threat the United States has faced from organized crime,” said Jack Riley, head of the Drug Enforcement Administration’s Chicago office.

. . . . Years ago, Mexico faced the same problem — of then-nascent cartels expanding their power — “and didn’t nip the problem in the bud,” said Jack Killorin, head of an anti-trafficking program in Atlanta for the Office of National Drug Control Policy. “And see where they are now.” Riley sounds a similar alarm: “People think, ‘The border’s 1,700 miles away. This isn’t our problem.’ Well, it is. These days, we operate as if Chicago is on the border.”

. . . . “This is the first time we’ve been seeing it — cartels who have their operatives actually sent here,” said Richard Pearson, a lieutenant with the Louisville Metropolitan Police Department, which arrested four alleged operatives of the Zetas cartel in November in the suburb of Okolona. People who live on the tree-lined street where authorities seized more than 2,400 pounds of marijuana and more than $1 million in cash were shocked to learn their low-key neighbors were accused of working for one of Mexico’s most violent drug syndicates, Pearson said.

. . . . In Chicago, the police commander who oversees narcotics investigations, James O’Grady, said street-gang disputes over turf account for most of the city’s uptick in murders last year, when slayings topped 500 for the first time since 2008. Although the cartels aren’t dictating the territorial wars, they are the source of drugs. Riley’s assessment is stark: He argues that the cartels should be seen as an underlying cause of Chicago’s disturbingly high murder rate. “They are the puppeteers,” he said. “Maybe the shooter didn’t know and maybe the victim didn’t know that. But if you follow it down the line, the cartels are ultimately responsible.”

* * *

Google Revolution Isn’t Worth Our Privacy
Evgeny Morozov, Notes EM (reprinted from Financial Times), April 5, 2013

[EDITOR’S NOTE: Last year I abandoned Google’s search engine (for DuckDuckGo), Google Mail (for Zoho), Google Docs (for various substitutes), and Google Reader (for Netvibes) because of the company’s decision, mentioned by Morozov in this op-ed, to stitch together personal data from more than 60 of its products/services into a single, mega-master One Profile to Rule Them All for each of its users. Here, Morozov lays out some of the far-reaching intentions behind Google’s move, and some of its meanings and implications. Be sure to read his words in the mutually illuminating light of the article directly below about the new Facebook phone.]

Let’s give credit where it is due: Google is not hiding its revolutionary ambitions. As its co-founder Larry Page put it in 2004, eventually its search function “will be included in people’s brains” so that “when you think about something and don’t really know much about it, you will automatically get information”.

Science fiction? The implant is a rhetorical flourish but Mr Page’s utopian project is not a distant dream. In reality, the implant does not have to be connected to our brains. We carry it in our pockets — it’s called a smartphone.

So long as Google can interpret — and predict — our intentions, Mr Page’s vision of a continuous and frictionless information supply could be fulfilled. However, to realise this vision, Google needs a wealth of data about us. Knowing what we search for helps — but so does knowing about our movements, our surroundings, our daily routines and our favourite cat videos.

. . . . [W]hen last year Google announced its privacy policy, which would bring the data collected through its more than 60 online services under one roof, that move made sense. The obvious reason for doing so is to make individual user profiles even more appealing to advertisers: when Google tracks you across its services, it can predict what ads to serve you much better than when it tracks you across only one such service.

But there is another reason, of course — and it has to do with the Grand Implant Agenda: the more Google knows about us, the easier it can make predictions about what we want – or will want in the near future. Google Now, the company’s latest offering, is meant to do just that: by tracking our every email, appointment and social networking activity, it can predict where we need to be, when, and with whom. Perhaps, it might even order a car to drive us there — the whole point is to relieve us of active decision-making. The implant future is already here — it’s just not evenly resisted.

* * *

The Soul of a New (Facebook) Machine
Alexis C. Madrigal, The Atlantic, April 4, 2013

Teaser: Facebook finally brings a phone to market, sort of.

[T]he biggest play here is not technical or strategic, but rhetorical. Facebook wants to change the way people think about technologies. . . . Throughout Zuckerberg’s talk, people and Facebook friends were used interchangeably. And for Zuckerberg and his employees, I think this is technically true. For them, all the people they care about are not only on Facebook, but active users who devote time and resources to building digital streams that are legible to other people as their lives. So, while you can read the Facebook phone announcement as the story of the company’s deeper integration with Google’s Android operating system, I also read Facebook Home as a story of the integration that Facebook’s employees have with their own product. And they’d like for the rest of the world to experience what they do.

. . . . Why do I think it is so important not to allow Zuckerberg to redefine “people” as “Facebook friends”? Because we need to be able to evaluate this technology’s impact very specifically within Facebook’s culture and aims. Facebook Home is not a story about “making the world more open and connected,” in general. This is a story about Facebook “making the world more open and connected,” with all the specific definitions the company brings to those ideas.

. . . . It’s not that I think Facebook communications are inferior to other ones, whether that’s face-to-face, Twitter, talking on the phone, or standard text messaging. That’s not the point. The point is that they are *not the same* as these other things.

. . . . Will it be worth opening up every part of your phone interaction to Facebook in order to access that experience? Do you want your definition of a computer to center on Facebook Friends and the limited et [sic] of actions you can take with them? I can’t answer that for you, but I can say that it is a tradeoff, and the more you think about it, the better.

* * *

The Meme Hustler
Evgeny Morozov, The Baffler No. 22 (April 8, 2013)

[EDITOR’S NOTE: Yes, it’s Morozov again. The man is all but ubiquitous today, and that’s a good thing, because he’s pointedly worth listening to. In the case of this particular piece, he’s pointedly worth listening to very slowly and deeply, because this is some seriously insightful — and darkly, counterculturally revolutionary — stuff that he’s laying out about the hijacking of our collective cultural discourse by a kind of linguistic-conceptual virus that disguises the ideological core assumptions of digital techno-utopianism under a cloak of inevitability, so that any serious critical examination of them becomes literally unthinkable.]

While the brightest minds of Silicon Valley are “disrupting” whatever industry is too crippled to fend off their advances, something odd is happening to our language. Old, trusted words no longer mean what they used to mean; often, they don’t mean anything at all. Our language, much like everything these days, has been hacked. Fuzzy, contentious, and complex ideas have been stripped of their subversive connotations and replaced by cleaner, shinier, and emptier alternatives; long-running debates about politics, rights, and freedoms have been recast in the seemingly natural language of economics, innovation, and efficiency. Complexity, as it turns out, is not particularly viral.

. . . [A] clique of techno-entrepreneurs has hijacked our language and, with it, our reason. In the last decade or so, Silicon Valley has triggered its own wave of linguistic innovation, a wave so massive that a completely new way to analyze and describe the world — a silicon mentality of sorts — has emerged in its wake. The old language has been rendered useless; our pre-Internet vocabulary, we are told, needs an upgrade.

. . . That we would eventually be robbed of a meaningful language to discuss technology was entirely predictable. That the conceptual imperialism of Silicon Valley would also pollute the rest of our vocabulary wasn’t.

The enduring emptiness of our technology debates has one main cause, and his name is Tim O’Reilly. The founder and CEO of O’Reilly Media, a seemingly omnipotent publisher of technology books and a tireless organizer of trendy conferences, O’Reilly is one of the most influential thinkers in Silicon Valley. Entire fields of thought — from computing to management theory to public administration — have already surrendered to his buzzwordophilia, but O’Reilly keeps pressing on. Over the past fifteen years, he has given us such gems of analytical precision as “open source,” “Web 2.0,” “government as a platform,” and “architecture of participation.” O’Reilly doesn’t coin all of his favorite expressions, but he promotes them with religious zeal and enviable perseverance. While Washington prides itself on Frank Luntz, the Republican strategist who rebranded “global warming” as “climate change” and turned “estate tax” into “death tax,” Silicon Valley has found its own Frank Luntz in Tim O’Reilly.

* * *

Grof on Giger: The Transpersonal Nature of Art, Inspiration, and Creativity
Karey Pohn, Association for Holotropic Breathwork International, February 28, 2013 (reprinted from The Inner Door, May 2010)

[EDITOR’S NOTE: Stanislav Grof is an icon and a legend in the field of transpersonal psychology, and is one of the field’s founders. H. R. Giger is an icon and a legend in the world of art, having made his mark as a painter, sculptor, and set designer with a genius for the dark and surreal; his most famous work is probably his Academy Award-winning design of the alien and its environment in the original Alien film, followed closely by his breathtaking semi-Lovecraft-inspired paintings in the 1977 book Necronomicon. In this interview, Grof muses — pun definitely intended — on the transpersonal/transcendent sources of Giger’s inspiration.]

I first encountered his work in Necronomicon, which was a large-format, high-quality paperback. I couldn’t believe what I saw. It was absolutely amazing. Now, I have a good understanding of him, not only because we have spent a lot of personal time together, but also because I had the chance to interview him for many, many hours for the book; and during that time, I was able to find out not only about his life but also about how he works.

It’s extraordinary. Some of his large paintings cover one wall in his house, and these amazing compositions are frequently arranged symmetrically. I found out that particularly when he is working with an airbrush, he has absolutely no idea what he is painting. He just begins in the left upper corner and aims the airbrush at the canvas. Then, as he told me, something just comes through, and he is himself surprised by what emerges.

In discussing Giger’s genius, I quote what Friedrich Nietzsche wrote in Ecce Homo (1888) about his own state of consciousness while creating Thus Spoke Zarathustra:

If one had the smallest vestige of superstition left in one, it would hardly be possible to set aside the idea that one is mere incarnation, mouthpiece, or medium of an almighty power. The idea of revelation, in the sense that something, which profoundly convulses and shatters one, becomes suddenly visible and audible with indescribable certainty and accuracy, describes the simple fact. One hears—one does not seek; one takes—one does not ask who gives; a thought suddenly flashes up like lightning, it comes with necessity, without faltering—I never had any choice in the matter.

In essence, something grabs you and comes through, and you basically become a channel for it. You’re not really the creator of it. You’re a mediator. Hans Ruedi certainly falls into that category.

* * *

The Visionary World of H. R. Giger (pdf), a.k.a. H. R. Giger and the Zeitgeist of the Twentieth Century
Stanislav Grof, October 2005

Several years ago, I had the privilege and pleasure to spend some time with Oliver Stone, a visionary genius who has portrayed in his films, with extraordinary artistic power, the shadow side of modern humanity. At one point, we talked about Ridley Scott’s movie Alien and the discussion focused on H. R. Giger, whose creature and set designs were the key element in the film’s success. At the Academy Awards ceremony for 1979, held at the Dorothy Chandler Pavilion in Los Angeles in April 1980, Giger received an Oscar for his work on Alien, for best achievement in visual effects.

I have known Giger’s work since the publication of his Necronomicon and have always felt a deep admiration for him, not only as an artistic genius, but also as a visionary with an uncanny ability to depict the deep, dark recesses of the human psyche revealed by modern consciousness research. In our discussion, I shared my feelings with Oliver Stone, who turned out to be himself a great admirer of Giger. His opinion about Giger and his place in the world of art and in human culture was very original and interesting. “I do not know anybody else,” he said, “who has so accurately portrayed the soul of modern humanity. A few decades from now when they will talk about the twentieth century, they will think of Giger.”

Although Oliver Stone’s statement momentarily surprised me by its extreme nature, I immediately realized that it reflected a profound truth. Since then, I have often recalled this conversation when confronted with various disturbing aspects of Western industrial civilization and with the alarming developments in the countries affected by technological progress. There is no other artist who has captured with equal power the ills plaguing modern society – the rampaging technology taking over human life, the suicidal destruction of the ecosystem of the earth, violence reaching apocalyptic proportions, sexual excesses, the insanity of a life driving people to mass consumption of tranquilizers and narcotic drugs, and the alienation individuals experience in relation to their bodies, to each other, and to nature.

. . . Giger’s art clearly comes from the depth of the collective unconscious, especially when we consider his prolific creative process. He reports that he often has no a priori concept of what a painting would look like. When creating some of his giant paintings, for instance, he started in the upper left corner and aimed the airbrush toward the canvas. The creative force was simply pouring through him, and he became its instrument. And yet the end result was a perfect composition and often showed remarkable bilateral symmetry.

. . . Giger’s determined quest for creative self-expression is inseparable from his relentless self-exploration and self-healing. In the analytic psychology of C. G. Jung, integration of the Shadow and the Anima, two quintessential motifs in Giger’s art, is seen as a critical therapeutic step in what Jung calls the process of individuation. Giger himself experiences his art as healing and as an important way to maintain his sanity. His art can also have a healing impact on those who are open to it because, like a Greek tragedy, it can facilitate powerful emotional catharsis for the viewers by exposing and revealing dark secrets of the human psyche.

Screen society vs. our capacity for humanity

Here’s reason number ten thousand and one for why you really ought to shut down your browser/tablet/smartphone and reenter the existential immediacy of your actual surrounding environment with its network of in-person social relationships just as soon as you finish reading this and then clicking through to read the full, brief article from which it’s excerpted:

Most of us are well aware of the convenience that instant electronic access provides. Less has been said about the costs. Research that my colleagues and I have just completed, to be published in a forthcoming issue of Psychological Science, suggests that one measurable toll may be on our biological capacity to connect with other people.

. . . Your brain is tied to your heart by your vagus nerve. . . [In addition to the fact that the relative strength of this brain-heart connection is related to overall physical health], the behavioral neuroscientist Stephen Porges has shown that vagal tone is central to things like facial expressivity and the ability to tune in to the frequency of the human voice. By increasing people’s vagal tone, we increase their capacity for connection, friendship and empathy. In short, the more attuned to others you become, the healthier you become, and vice versa. This mutual influence also explains how a lack of positive social contact diminishes people. Your heart’s capacity for friendship also obeys the biological law of “use it or lose it.” If you don’t regularly exercise your ability to connect face to face, you’ll eventually find yourself lacking some of the basic biological capacity to do so.

. . . When you share a smile or laugh with someone face to face, a discernible synchrony emerges between you, as your gestures and biochemistries, even your respective neural firings, come to mirror each other. It’s micro-moments like these, in which a wave of good feeling rolls through two brains and bodies at once, that build your capacity to empathize as well as to improve your health. If you don’t regularly exercise this capacity, it withers. Lucky for us, connecting with others does good and feels good, and opportunities to do so abound.

So the next time you see a friend, or a child, spending too much of their day facing a screen, extend a hand and invite them back to the world of real social encounters. You’ll not only build up their health and empathic skills, but yours as well. Friends don’t let friends lose their capacity for humanity.

More at The New York Times: “Your Phone vs. Your Heart”