The following paragraphs are from a talk delivered by Pinboard founder Maciej Cegłowski at the recent Emerging Technologies for the Enterprise conference in Philadelphia. Citing as Exhibit A the colossal train wreck that was the 2016 American presidential election, Cegłowski basically explains how, in the current version of the Internet that has emerged over the past decade-plus, we have collectively created a technology that is perfectly calibrated for undermining Western democratic societies and ideals.
But as incisive as his analysis is, I seriously doubt that his (equally incisive) proposed solutions, described later in the piece, will ever be implemented to any meaningful extent. I mean, if we’re going to employ the explicitly Frankensteinian metaphor of “building a monster,” then it’s important to bear in mind that Victor Frankenstein and his wretched creation did not find their way to anything resembling a happy ending. (And note that Cegłowski himself acknowledges as much at the end of his piece when he closes his discussion of proposed solutions by asserting that “even though we’re likely to fail, all we can do is try.”)
This year especially there’s an uncomfortable feeling in the tech industry that we did something wrong, that in following our credo of “move fast and break things,” some of what we knocked down were the load-bearing walls of our democracy. . . .
A question few are asking is whether the tools of mass surveillance and social control we spent the last decade building could have had anything to do with the debacle of the 2017 [sic] election, or whether destroying local journalism and making national journalism so dependent on our platforms was, in retrospect, a good idea. . . .
We built the commercial internet by mastering techniques of persuasion and surveillance that we’ve extended to billions of people, including essentially the entire population of the Western democracies. But admitting that this tool of social control might be conducive to authoritarianism is not something we’re ready to face. After all, we’re good people. We like freedom. How could we have built tools that subvert it? . . .
The economic basis of the Internet is surveillance. Every interaction with a computing device leaves a data trail, and whole industries exist to consume this data. Unlike dystopian visions from the past, this surveillance is not just being conducted by governments or faceless corporations. Instead, it’s the work of a small number of sympathetic tech companies with likable founders, whose real dream is to build robots and Mars rockets and do cool things that make the world better. Surveillance just pays the bills. . . .
Orwell imagined a world in which the state could shamelessly rewrite the past. The Internet has taught us that people are happy to do this work themselves, provided they have their peer group with them, and a common enemy to unite against. They will happily construct alternative realities for themselves, and adjust them as necessary to fit the changing facts . . . .
A lot of what we call “disruption” in the tech industry has just been killing flawed but established institutions, and mining them for parts. When we do this, we make a dangerous assumption about our ability to undo our own bad decisions, or the time span required to build institutions that match the needs of new realities.
Right now, a small caste of programmers is in charge of the surveillance economy, and has broad latitude to change it. But this situation will not last for long. The kinds of black-box machine learning that have been so successful in the age of mass surveillance are going to become commoditized and will no longer require skilled artisans to deploy. . . .
Unless something happens to mobilize the tech workforce, or unless the advertising bubble finally bursts, we can expect the weird, topsy-turvy status quo of 2017 to solidify into the new reality.
If you find yourself in Waco, Texas, in October 2013 — specifically, on Friday, October 25 — and you’re in the mood to celebrate the Halloween horror season in style, be sure to come join us for the fourth annual Dark Mirror horror film festival. Four classic horror films. Informative introductory talks by vampire expert and religion scholar Dr. J. Gordon Melton, Baylor University film professor Dr. Jim Kendrick, and yours truly. All the junk food you care to buy (popcorn, candy, chili cheese nachos, hot dogs, soda, water). What’s not to like?
I’m presently teaching a sophomore college course about horror and science fiction in literature and film. (You can view the syllabus online.) Yesterday’s class meeting was devoted to introducing Mary Shelley and Frankenstein by giving background on Mary’s life and describing the epic, shadowy, amazing, uncanny, utterly mythic summer of 1816, when Mary stayed with Percy Shelley, Lord Byron, and Doctor John Polidori at the Villa Diodati near Lake Geneva, Switzerland, and both the literary vampire (leading directly to Dracula seven decades later) and the Frankenstein myth were born out of the group’s heady conversations about ghost stories and cutting-edge science that unfolded around the fire.
More specifically, these horror icons were born from the horror-writing contest that Byron suggested they undertake in order to pass their time during that eerie “year without a summer,” which was marked by Armageddon-ish weather, crop failure, famine, and epidemics in Europe, Britain, and America (with effects in Asian countries as well) as “the last great subsistence crisis in the western world” unfolded in the wake of Mount Tambora’s eruption in Indonesia the previous year, which blanketed the atmosphere with an obscuring cloud of ash.
I’ve often thought this spontaneous nexus of events — a myth-level natural catastrophe coinciding with the philosophical and literary birth of two iconic/mythic figures in the gothic and horror field — sounds like a fictional tale of its own, something that someone might make up as a dark and fascinating horror story. Maybe that’s why the events surrounding Frankenstein’s birth have long been nearly as famous as the novel itself (a fact helped, of course, by Mary’s account of that summer and the book’s genesis in her introduction to the standard 1831 edition). That summer has been made into two separate movies — or maybe I’m forgetting that there are more than that — and referenced in partial form many more times, from the introductory segment to 1973’s not-bad television movie Frankenstein: The True Story to the segments involving Mary, Percy, and Co. in the not-bad 1990 film adaptation of Brian Aldiss’s Frankenstein Unbound to the charming prologue of director James Whale’s Bride of Frankenstein in 1935. The summer of 1816 at the Villa Diodati and environs is like a living novel, a manifestation of fiction in history, replete with obvious, even glaring, symbolism, and planted firmly in the gothic horror genre.
And that’s really all I have to say in this hastily written post. I think I’m still riding on momentum from yesterday’s class session, where I did a brain dump about all of these things, leaving it to the PowerPoint presentation that I had put together ahead of time to keep me on something resembling a coherent path as I talked excitedly about a mega-subject that has kept me entranced with fascination for the past 25 years or so.
Add to that, of course, the fact that some people have interpreted Mary Shelley’s description of the “waking dream” in which she received the inspiration for Frankenstein as an episode of sleep paralysis — a supposition made all the more probable, or at least suggestive and evocative, by the fact that she and her family knew Henry Fuseli, the famous painter of The Nightmare, that master image of both the gothic horror genre and sleep paralysis studies, and by the additional fact that she actually gave a deliberate “quote” of that painting in the mise-en-scène of the moment when Victor Frankenstein bursts into the bridal bedroom to find Elizabeth flung backward, dead, across the bed while the monster leers from the window above. James Whale likewise quoted the same staging in his 1931 cinematic vision/version. The fascination factor, as we might call it, is unbelievably high here.
It was a total accident, by the way, and something I didn’t realize until three days ago, that I began teaching this literature course, with Frankenstein as the first assigned text, right as August 30 marked Mary Shelley’s 216th birthday and was being hailed as “Frankenstein Day” all over the Interwebs.
Here: watch these. They’re good medicine, all (especially the last two).
And speaking — as I did just yesterday — of Mary Shelley and Frankenstein, here’s author and history professor Michael Saler discussing two new books about Ms. Shelley and her novel (The Annotated Frankenstein and The Lady and Her Monsters: A Tale of Dissections, Real-Life Dr. Frankensteins, and the Creation of Mary Shelley’s Masterpiece) for The Times Literary Supplement:
The child may be father to the man, but how did a girl become mother to the monster? We continue to ask that of Mary Shelley, who wrote Frankenstein, or the Modern Prometheus (1818) before she turned twenty. It is a startling work from someone so young, combining profound philosophic disquisitions with melodramatic blood and thunder. Some see it as the first science fiction novel, but as Roseanne Montillo shows in The Lady and Her Monsters, Shelley’s narrative of a scientist’s quest to discover and harness the “principle of life” was less an extrapolation into the future than a faithful representation of contemporary practices. Indeed, Frankenstein is one of the earliest horror novels about modernity, directly confronting the instabilities provoked by the scientific, Industrial and French Revolutions.
. . . The first edition of this rich and ambiguous work didn’t fly off the shelves. But it was resurrected in 1823 — this time published under its author’s name — as a result of a popular stage adaptation that promoted the monster rather than the philosophy. Shelley herself pursued this lucrative strategy in her “Introduction” to a revised edition of the novel in 1831, which immediately became a bestseller. Here she situated the work’s genesis in the ghost-story tradition, recalling the summer of 1816 when she joined a party visiting Lord Byron in Switzerland. After several dark and stormy nights spent reading ghost stories, Byron suggested they write their own. Shelley retrospectively claimed she intended to write one that would “awaken thrilling horror”.
The 1831 edition was no longer dedicated to her father the Enlightenment philosopher. Instead, it featured the first book illustration of the nameless “monster”. Shelley bid her “hideous progeny go forth and prosper”, which it did, especially after the release of James Whale’s film version in 1931. Boris Karloff delivered a poignant performance as the monster, now saddled with a “criminal brain” and rendered inarticulate. Shelley’s confrontation with modernity was briefly effaced: but it would never remain buried for long.
More at The Times Literary Supplement: “Enlightened monsters”
Image: Portrait of Mary Shelley, 1840, by Richard Rothwell [Public domain], via Wikimedia Commons
Here’s British author and journalist Steven Poole, writing for Aeon magazine in an article published just today and titled “Slaves to the Algorithm”:
Our age elevates the precision-tooled power of the algorithm over flawed human judgment. From web search to marketing and stock-trading, and even education and policing, the power of computers that crunch data according to complex sets of if-then rules is promised to make our lives better in every way. Automated retailers will tell you which book you want to read next; dating websites will compute your perfect life-partner; self-driving cars will reduce accidents; crime will be predicted and prevented algorithmically. If only we minimise the input of messy human minds, we can all have better decisions made for us. So runs the hard sell of our current algorithm fetish.
. . . If you are feeling gloomy about the automation of higher education, the death of newspapers, and global warming, you might want to talk to someone — and there’s an algorithm for that, too. A new wave of smartphone apps with eccentric titular orthography (iStress, myinstantCOACH, MoodKit, BreakkUp) promise a psychotherapist in your pocket. Thus far they are not very intelligent, and require the user to do most of the work — though this second drawback could be said of many human counsellors too. Such apps hark back to one of the legendary milestones of ‘artificial intelligence’, the 1960s computer program called ELIZA. That system featured a mode in which it emulated Rogerian psychotherapy, responding to the user’s typed conversation with requests for amplification (‘Why do you say that?’) and picking up — with its ‘natural-language processing’ skills — on certain key words from the input. Rudimentary as it is, ELIZA can still seem spookily human. Its modern smartphone successors might be diverting, but this field presents an interesting challenge in the sense that, the more sophisticated it gets, the more potential for harm there will be. One day, the makers of an algorithm-driven psychotherapy app could be sued by the survivors of someone to whom it gave the worst possible advice.
What lies behind our current rush to automate everything we can imagine? Perhaps it is an idea that has leaked out into the general culture from cognitive science and psychology over the past half-century — that our brains are imperfect computers. If so, surely replacing them with actual computers can have nothing but benefits. Yet even in fields where the algorithm’s job is a relatively pure exercise in number-crunching, things can go alarmingly wrong.
Here’s author and cultural critic John David Ebert, writing in The New Media Invasion: Digital Technologies and the World They Unmake (2011):
Everywhere we look nowadays, we find the same worship of the machine at the expense of the human being, who always comes out of the equation looking like an inconvenient, leftover remainder: instead of librarians to check out your books for you, a machine will do it better; instead of clerks to ring up your groceries for you, a self-checkout will do it better; instead of a real live DJ on the radio, an electronic one will do the job better; instead of a policeman to write you a traffic ticket, a camera (connected to a computer) will do it better. In other words . . . the human being is actually disappearing from his own society, just as the automobile long ago caused him to disappear from the streets of his cities . . . . [O]ur society is increasingly coming to be run and operated by machines instead of people. Machines are making more and more of our decisions for us; soon, they will be making all of them.
Here’s science fiction legend Brian Aldiss, writing in the first chapter of his seminal 1973 study Billion Year Spree: The True History of Science Fiction, titled “The Origin of the Species: Mary Shelley”:
For a thousand people familiar with the story of Victor creating his monster from selected cadaver spares and endowing them with new life, only to shrink back in horror from his own creation, not one will have read Mary Shelley’s original novel. This suggests something of the power of infiltration of this first great myth of the industrial age. [emphasis added]
Here’s literature scholar Christopher Small, writing in his (likewise influential) 1972 book Mary Shelley’s Frankenstein: Tracing the Myth:
The Monster is not a ghost. He is not a genie or a spirit summoned by magic from the deep; at the same time he issues, like these, from the imagination. He is manifestly a product, or aspect, of his maker’s psyche: he is a psychic phenomenon given objective, or ‘actual’ existence. A Doppelganger of ‘real flesh and blood’ is not unknown, of course, in other fictions, nor is the idea of a man created ‘by other means than Nature has hitherto provided’, the creation of Prometheus being the archetype. But Frankenstein is ‘the modern Prometheus’: the profound effect achieved by Mary lay in showing the Monster as the product of modern science; made, not by enchantment, i.e., directly by the unconscious, an ‘imaginary’ being, but through a process of scientific discovery, i.e., the imagination objectified.
Here’s the late, great cultural critic/historian and philosopher Theodore Roszak, writing in his fairly legendary 1973 book Where the Wasteland Ends: Politics and Transcendence in Postindustrial Society:
Long before the demonic possibilities of science had become clear for all to see, it was a Romantic novelist who foresaw the career of Dr. Frankenstein — and so gave us the richest (and darkest) literary myth the culture of science has produced.
Here’s Agent Smith, the artificial intelligence program in charge of keeping order within the simulated human reality of The Matrix (1999), speaking to the captured Morpheus, leader of the resistance movement against the machine civilization that has enslaved humans:
As soon as we started thinking for you it really became our civilization, which is of course what this is all about. Evolution, Morpheus, evolution. Like the dinosaur. Look out that window. You’ve had your time. The future is our world, Morpheus. The future is our time.
Here’s Victor Frankenstein in the 1831 edition of Frankenstein, or The Modern Prometheus, lying on his deathbed and lamenting his former obsessive quest to create and “perfect” life, which led not only to his own utter wretchedness and destruction but to that of everybody he loved:
My limbs now tremble, and my eyes swim with the remembrance; but then a resistless and almost frantic impulse urged me forward; I seemed to have lost all soul or sensation but for this one pursuit.
. . . Do you share my madness? Have you drank also of the intoxicating draught? Hear me — let me reveal my tale, and you will dash the cup from your lips!
If one is looking for a guiding thread of supervening meaning or moral insight here, I might be inclined to borrow and recontextualize the words of legendary and visionary music producer Sandy Pearlman — from the liner notes to Blue Öyster Cult’s epic 1988 concept album Imaginos, about the centuries-long efforts of a transcendent pantheon of “Invisibles” to intercede in human history and guide it to a preordained conclusion — by suggesting that this whole situation portends, indicates, and represents “a disease with a long incubation.”
Images: “HAL9000” from 2001: A Space Odyssey by Cryteria (Own work) [CC-BY-3.0 (http://creativecommons.org/licenses/by/3.0)], via Wikimedia Commons. “Frontispiece to Frankenstein 1831” by Theodore Von Holst (1810-1844) (Tate Britain. Private collection, Bath.) [Public domain], via Wikimedia Commons.
Dream researcher, Teeming Brain friend, and future Teeming Brain contributor Ryan Hurd — who has spoken about dreams, consciousness, sleep paralysis, and related matters at Stanford, Yale, UC Berkeley, the Rhine Center, and elsewhere — recently shared an account of an apparently precognitive dream that he personally experienced. As I was reading through it, in addition to finding his description of what happened to be rather fascinating, I found that a number of thoughts and recognitions were crowding forward from the peripheries of my awareness to announce the wider implications of such experiences. All of them have to do with the question of what’s really involved in and portended by exactly the philosophical effect Ryan identifies in connection with anomalous experiences in general, namely, a cracking of the “dam” of assumptions that lead most of us to explain away the significance of such anomalies for our worldview, or even to screen out a conscious acceptance and/or awareness of such things altogether. When this kind of breach in one’s personal cosmos is effected, the resulting flood of formerly rejected realities has the capacity to recast everything in ways that can be experienced as horrific, salvific, or even both at once.
Throughout the 1990s the Clinton administration pushed hard for the universal integration of computers and information technology throughout America’s public education system, culminating in Bill Clinton’s official presidential call for “A computer in every classroom,” since, in his words, technology is “the great equalizer” for schools. No matter that it was an idea (and ideology) that was basically made up and lacking in any real support. No matter that, as Todd Oppenheimer incisively argued in a now-classic 1997 Atlantic article titled “The Computer Delusion” (and later in his 2003 book-length expansion of it, The Flickering Mind: The False Promise of Technology in the Classroom and How Learning Can Be Saved), “There is no good evidence that most uses of computers significantly improve teaching and learning, yet school districts are cutting programs — music, art, physical education — that enrich children’s lives to make room for this dubious nostrum, and the Clinton Administration has embraced the goal of ‘computers in every classroom’ with credulous and costly enthusiasm.” The techno-utopian impulse for America’s schools proved to be unstoppable on a practical level, and schools en masse, from kindergarten to college, bought into it hook, line, and sinker. The idea prevalent at administrative levels was and — as I can vouch from having spent the last decade-plus working in high school and college settings — still is that technology in and of itself is a Great Thing that will Revolutionize Learning. Even though many individual administrators and teachers are quite savvy and sensitive to the nuances of the techno-utopian gospel, the overall institutional-cultural pressure is overwhelmingly in the direction of uncritical adoption.