Robert Kaplan, writing for The Washington Post:
It is impossible to imagine Trump and his repeated big lies that go viral except in the digital-video age. It is impossible to imagine our present political polarization except in the age of the Internet, which drives people to sites of extreme views that validate their preexisting prejudices. And, in the spirit of Hollywood, it is impossible to imagine the degree and intensity of emotional and sensory manipulation, false rumors, exaggerations and character assassination that decay our public dialogue except in this new and terrifying age of technology which has only just begun.
Digital-video technology, precisely because it is given to manipulation, is inherently controlling. Think of how the great film directors of the 20th century were able to take over your mind for a few hours: a new experience for audiences that previous generations had never known. Theater may be as old as the ancient Greeks, but the technology of film lent a new and powerful force to the theatrical experience. Moreover, it was contained within a limited time period, and afterward you came back to the real world.
In the 21st century, dictators may have the capability to be the equivalent of film directors, and the show never stops. Indeed, Joseph Goebbels would undoubtedly thrive in today’s world. As for warfare itself, it will be increasingly about dividing and demoralizing enemy populations through disinformation campaigns whose techniques are still in their infancy. . . .
Never before have we had to fight for democracy and individual rights as now in this new and — in some sense — dark age of technology. We must realize that the fight for democracy is synonymous with the fight for objectivity, which lies at the core of professional journalism — a calling whose foundational spirit was forged in the print-and-typewriter age, when mainly the movies were fake.
We will fight best by thinking tragically to avoid tragedy. This means learning to think like the tyrants who feed and prosper on misinformation so we can keep several steps ahead of them. Only in that way can we build safeguards against the specific dangers of the digital experience. The pioneers of Silicon Valley were inherent optimists who simply believed in connecting the world. But it is precisely such integration that provides our authoritarian enemies with access into our own democratic systems. The future will be about wars of integration rather than wars of geographic separation. So now constructive pessimism is called for. The innocent days when illusions were the province of movie stage sets are way behind us.
Full text: “Everything Here Is Fake”
Here’s renowned neuroscientist Christof Koch explaining in a Wall Street Journal piece that our future will be a dystopian nightmare in which humans will necessarily become ever more completely fused on a neurological level with super sophisticated computer technologies. This will, he says, be a non-negotiable requirement if we want to keep up with the artificial intelligences that will be billions of times smarter than us, and that will otherwise utterly rule humanity and pose an existential threat to us in all kinds of ways that we, with our currently unenhanced meat brains, can hardly imagine.
Or actually, Koch speaks not grimly but enthusiastically of this future (and semi-present) scenario. He views the technological enhancement of the human brain for purposes of keeping pace with AI as an exciting thing. The negative gloss on it is mine. What a wonderful world, he avers. “Resistance is futile. You will be assimilated,” my own meat brain keeps hearing.
Whether you are among those who believe that the arrival of human-level AI signals the dawn of paradise, such as the technologist Ray Kurzweil, or the sunset of the age of humans, such as the prominent voices of the philosopher Nick Bostrom, the physicist Stephen Hawking and the entrepreneur Elon Musk, there is no question that AI will profoundly influence the fate of humanity.
There is one way to deal with this growing threat to our way of life. Instead of limiting further research into AI, we should turn it in an exciting new direction. To keep up with the machines we’re creating, we must move quickly to upgrade our own organic computing machines: We must create technologies to enhance the processing and learning capabilities of the human brain. . . .
Unlike say, the speed of light, there are no known theoretical limits to intelligence. While our brain’s computational power is more or less fixed by evolution, computers are constantly growing in power and flexibility. This is made possible by a vast ecosystem of several hundred thousand hardware and software engineers building on each other’s freely shared advances and discoveries. How can the human species keep up? . . .
In the face of this relentless onslaught, we must actively shape our future to avoid dystopia. We need to enhance our cognitive capabilities by directly intervening in our nervous systems.
We are already taking steps in this direction. . . .
My hope is that someday, a person could visualize a concept — say, the U.S. Constitution. An implant in his visual cortex would read this image, wirelessly access the relevant online Wikipedia page and then write its content back into the visual cortex, so that he can read the webpage with his mind’s eye. All of this would happen at the speed of thought. Another implant could translate a vague thought into a precise and error-free piece of digital code, turning anyone into a programmer.
People could set their brains to keep their focus on a task for hours on end, or control the length and depth of their sleep at will.
Another exciting prospect is melding two or more brains into a single conscious mind by direct neuron-to-neuron links — similar to the corpus callosum, the bundle of two hundred million fibers that link the two cortical hemispheres of a person’s brain. This entity could call upon the memories and skills of its member brains, but would act as one “group” consciousness, with a single, integrated purpose to coordinate highly complex activities across many bodies.
These ideas are compatible with everything we know about the brain and the mind. Turning them from science fiction into science fact requires a crash program to design safe, inexpensive, reliable and long-lasting devices and procedures for manipulating brain processes inside their protective shell. It must be focused on the end-to-end enhancement of human capabilities. . . .
While the 20th century was the century of physics — think the atomic bomb, the laser and the transistor — this will be the century of the brain. In particular, it will be the century of the human brain — the most complex piece of highly excitable matter in the known universe. It is within our reach to enhance it, to reach for something immensely powerful we can barely discern.
Full article: “To Keep Up with AI, We’ll Need High-Tech Brains” (You may or may not encounter a paywall)
This one-minute film by neophyte French filmmaker Gaspar Palacio is just brilliant. And I don’t use that word lightly. It’s like a master class in cinematic microfiction. Here’s how Palacio describes it at Vimeo:
The one minute tale of a survivalist. When the siren rings in the distance, a family has to get inside the shelter. Nothing will ever be the same again.
At Digg it’s described like this:
When the sirens started blaring, the survivalist was ready. He had been planning for this all along.
Of additional interest: The film’s writer, Robert J. Lee, runs a site titled Two Pages a Week, where he shares two-page film scripts that he began to produce weekly after someone challenged him to do so. The site currently features more than seventy of them.
I’m confident that what follows is the best paragraph I’ll read this week. I daresay it may be the best one you’ll read, too. Unsurprisingly, it’s from James Howard Kunstler’s blog. For me, it provides both a substantively and a tonally accurate description of what I’ve been seeing, hearing, and experiencing around me in recent weeks and months (and years).
Poor old Karl Marx, tortured by boils and phantoms, was right about one thing: History repeats itself, first as tragedy, second as farce. Thus, I give you the Roman Empire and now the United States of America. Rome surrendered to time and entropy. Our method is to drive a gigantic clown car into a ditch.
BONUS ITEM: Here’s the best headline I’ve read in recent memory. The story itself resides behind a paywall at The Washington Post, so I don’t know what it actually says, but the headline alone probably says it all:
Rocket man and dotard go bonkers in toontown
I can’t help wondering if this headline might serve for future generations as some sort of quasi/crypto-Zen koan of esoteric fascination, in the same way that “No Wife, No Horse, No Mustache” worked for Robert Anton Wilson.
This brief video essay on the source of our collective craving for “the awful futures of apocalyptic fiction” is really well done. Skillfully executed and thought-provoking. A worthwhile investment of five reflective minutes. Here’s the description:
In the first two decades of the new millennium, stories of the post-apocalypse have permeated pop culture, from books such as Cormac McCarthy’s The Road (2006), Paolo Bacigalupi’s The Windup Girl (2009) and Emily St John Mandel’s Station Eleven (2014) to films and TV programmes such as The Walking Dead (2010-), the Hunger Games series (2012-15) and Mad Max: Fury Road (2015). While post-apocalyptic fictions of previous eras largely served as cautionary tales — against nuclear brinksmanship in On the Beach (1959) or weaponised biology in The Stand (1978) — today’s versions of these tales depict less alterable, more oblique and diffuse visions of our doom. So why can’t we seem to get enough of humanity’s unavoidable collapse and its bleak aftermath?
Dispatches from the Ruins reflects on what these stories — set among crumbling buildings, overgrown lots and barren wastelands — might be telling us about modern fears and fantasies. This Aeon original video is adapted from an Aeon essay by the US writer Frank Bures. Bures is also the author of The Geography of Madness (2016), a book about cultural syndromes across the world. His work has been included in the Best American Travel Writing and appeared in Harper’s, Lapham’s Quarterly and the Washington Post Magazine, among others.
The following paragraphs are from a talk delivered by Pinboard founder Maciej Cegłowski at the recent Emerging Technologies for the Enterprise conference in Philadelphia. Citing as Exhibit A the colossal train wreck that was the 2016 American presidential election, Cegłowski basically explains how, in the current version of the Internet that has emerged over the past decade-plus, we have collectively created a technology that is perfectly calibrated for undermining Western democratic societies and ideals.
But as incisive as his analysis is, I seriously doubt that his (equally incisive) proposed solutions, described later in the piece, will ever be implemented to any meaningful extent. I mean, if we’re going to employ the explicitly Frankensteinian metaphor of “building a monster,” then it’s important to bear in mind that Victor Frankenstein and his wretched creation did not find their way to anything resembling a happy ending. (And note that Cegłowski himself acknowledges as much at the end of his piece when he closes his discussion of proposed solutions by asserting that “even though we’re likely to fail, all we can do is try.”)
This year especially there’s an uncomfortable feeling in the tech industry that we did something wrong, that in following our credo of “move fast and break things,” some of what we knocked down were the load-bearing walls of our democracy. . . .
A question few are asking is whether the tools of mass surveillance and social control we spent the last decade building could have had anything to do with the debacle of the 2017 [sic] election, or whether destroying local journalism and making national journalism so dependent on our platforms was, in retrospect, a good idea. . . .
We built the commercial internet by mastering techniques of persuasion and surveillance that we’ve extended to billions of people, including essentially the entire population of the Western democracies. But admitting that this tool of social control might be conducive to authoritarianism is not something we’re ready to face. After all, we’re good people. We like freedom. How could we have built tools that subvert it? . . .
The economic basis of the Internet is surveillance. Every interaction with a computing device leaves a data trail, and whole industries exist to consume this data. Unlike dystopian visions from the past, this surveillance is not just being conducted by governments or faceless corporations. Instead, it’s the work of a small number of sympathetic tech companies with likable founders, whose real dream is to build robots and Mars rockets and do cool things that make the world better. Surveillance just pays the bills. . . .
Orwell imagined a world in which the state could shamelessly rewrite the past. The Internet has taught us that people are happy to do this work themselves, provided they have their peer group with them, and a common enemy to unite against. They will happily construct alternative realities for themselves, and adjust them as necessary to fit the changing facts . . . .
A lot of what we call “disruption” in the tech industry has just been killing flawed but established institutions, and mining them for parts. When we do this, we make a dangerous assumption about our ability to undo our own bad decisions, or the time span required to build institutions that match the needs of new realities.
Right now, a small caste of programmers is in charge of the surveillance economy, and has broad latitude to change it. But this situation will not last for long. The kinds of black-box machine learning that have been so successful in the age of mass surveillance are going to become commoditized and will no longer require skilled artisans to deploy. . . .
Unless something happens to mobilize the tech workforce, or unless the advertising bubble finally bursts, we can expect the weird, topsy-turvy status quo of 2017 to solidify into the new reality.
Here in North Texas we’re currently experiencing the warmest start to a year on record. This comes on the heels of the warmest winter in Texas history. A few years ago we had the dramatic wildfire apocalypse — enabled by an epic drought — that engulfed huge portions of the state, and that had me nervously watching plumes of smoke billow up from behind the hillside in back of my house. The drought was ended by historic flooding. The same year as the floods, a positively crazy chain of severe spring thunderstorms tore right through the area where my family and I live, spawning a line of repeated tornadoes, one after the other, all afternoon and overnight. This is something that has always been more common back in the Missouri Ozarks where I’m from. Nor was the perception of something different down here merely a subjective one; 2015 ended up being a record year for tornadoes in Texas. Last year there was more severe flooding, including right where I live. Thus far, my entire time in Texas has been marked by one natural disaster after another. And to think, one reason my family and I moved down here in the first place was to leave behind the increasingly severe weather in Missouri, especially the brutal winters, where crippling ice storms have become much more frequent during the past ten to fifteen years than they were during my entire previous life up there.
So in light of such things, this meditation in The New York Times Magazine on not just the future but the present reality of climate change really hits home.
The future we’ve been warned about is beginning to saturate the present. We tend to imagine climate change as a destroyer. But it also traffics in disruption, disarray: increasingly frequent and more powerful storms and droughts; heightened flooding; expanded ranges of pests turning forests into fuel for wildfires; stretches of inhospitable heat. So many facets of our existence — agriculture, transportation, cities and the architecture they spawned — were designed to suit specific environments. Now they are being slowly transplanted into different, more volatile ones, without ever actually moving. . . .
We seem able to normalize catastrophes as we absorb them, a phenomenon that points to what Peter Kahn, a professor of psychology at the University of Washington, calls “environmental generational amnesia.” Each generation, Kahn argues, can recognize only the ecological changes its members witness during their lifetimes. . . .
Scenarios that might sound dystopian or satirical as broad-strokes future projections unassumingly materialize as reality. Last year, melting permafrost in Siberia released a strain of anthrax, which had been sealed in a frozen reindeer carcass, sickening 100 people and killing one child. In July 2015, during the hottest month ever recorded on earth (until the following year), and the hottest day ever recorded in England (until the following summer), the Guardian newspaper had to shut down its live-blogging of the heat wave when the servers overheated. And low-lying cities around the world are experiencing increased “clear-sky flooding,” in which streets or entire neighborhoods are washed out temporarily by high tides and storm surges. Parts of Washington now experience flooding 30 days a year, a figure that has roughly quadrupled since 1960. In Wilmington, N.C., the number is 90 days. But scientists and city planners have conjured a term of art that defuses that astonishing reality: “nuisance flooding,” they call it.
Kahn calls our environmental generational amnesia “one of the central psychological problems of our lifetime,” because it obscures the magnitude of so many concrete problems. You can wind up not looking away, exactly, but zoomed in too tightly to see things for what they are. Still, the tide is always rising in the background, swallowing something. And the longer you live, the more anxiously trapped you may feel between the losses already sustained and the ones you see coming. . . .
The future is always somebody else’s present — it will very likely feel as authentic, and only as horrific, as our moment does to us. But the present is also somebody else’s future: We are already standing on someone else’s ludicrous map. Except none of us are in on the joke, and I’m guessing that it won’t feel funny any time soon.
Here’s science writer Carrie Arnold, in a newly published article at Aeon titled “Watchers of the Earth,” discussing the possibility that indigenous myths may carry warning signals for natural disasters:
Shortly before 8am on 26 December 2004, the cicadas fell silent and the ground shook in dismay. The Moken, an isolated tribe on the Andaman Islands in the Indian Ocean, knew that the Laboon, the ‘wave that eats people’, had stirred from his ocean lair. The Moken also knew what was next: a towering wall of water washing over their island, cleansing it of all that was evil and impure. To heed the Laboon’s warning signs, elders told their children, run to high ground.
The tiny Andaman and Nicobar Islands were directly in the path of the tsunami generated by the magnitude 9.1 earthquake off the coast of Sumatra. Final totals put the islands’ death toll at 1,879, with another 5,600 people missing. When relief workers finally came ashore, however, they realised that the death toll was skewed. The islanders who had heard the stories about the Laboon or similar mythological figures survived the tsunami essentially unscathed. Most of the casualties occurred in the southern Nicobar Islands. Part of the reason was the area’s geography, which generated a higher wave. But also at the root was the lack of a legacy; many residents in the city of Port Blair were outsiders, leaving them with no indigenous tsunami warning system to guide them to higher ground.
Humanity has always courted disaster. We have lived, died and even thrived alongside vengeful volcanoes and merciless waves. Some disasters arrive without warning, leaving survival to luck. Often, however, there is a small window of time giving people a chance to escape. Learning how to crack open this window can be difficult when a given catastrophe strikes once every few generations. So humans passed down stories through the ages that helped cultures to cope when disaster inevitably struck. These stories were fodder for anthropologists and social scientists, but in the past decade, geologists have begun to pay more attention to how indigenous peoples understood, and prepared for, disaster. These stories, which couched myth in metaphor, could ultimately help scientists prepare for cataclysms to come.
Reading this triggered a flood of associated thoughts this morning, mostly related to things I’ve read elsewhere that resonate with it. Although the basic focus is different, for me this article somewhat recalls a starkly apocalyptic and millenarian passage from the ending to Benjamin Hoff’s The Te of Piglet (1992), a book that many readers found off-putting for its semi-grimness, which represented a departure from the more charmingly whimsical presentation of Taoism that Hoff had adopted in its predecessor, The Tao of Pooh.
Greetings, Teeming Brainers. I’m just peeking in from the digital wings, amid much ongoing blog silence, to observe that many of the issues and developments — sociocultural, technological, and more — that I began furiously tracking here way back in 2006 are continuing to head in pretty much the same direction. A case in point is provided by the alarming information, presented in a frankly alarmed tone, that appears in this new piece from Scientific American (originally published in SA’s German-language sister publication, Spektrum der Wissenschaft):
Everything started quite harmlessly. Search engines and recommendation platforms began to offer us personalised suggestions for products and services. This information is based on personal and meta-data that has been gathered from previous searches, purchases and mobility behaviour, as well as social interactions. While officially, the identity of the user is protected, it can, in practice, be inferred quite easily. Today, algorithms know pretty well what we do, what we think and how we feel — possibly even better than our friends and family or even ourselves. Often the recommendations we are offered fit so well that the resulting decisions feel as if they were our own, even though they are actually not our decisions. In fact, we are being remotely controlled ever more successfully in this manner. The more is known about us, the less likely our choices are to be free and not predetermined by others.
But it won’t stop there. Some software platforms are moving towards “persuasive computing.” In the future, using sophisticated manipulation technologies, these platforms will be able to steer us through entire courses of action, be it for the execution of complex work processes or to generate free content for Internet platforms, from which corporations earn billions. The trend goes from programming computers to programming people. . . .
[I]t can be said that we are now at a crossroads. Big data, artificial intelligence, cybernetics and behavioral economics are shaping our society — for better or worse. If such widespread technologies are not compatible with our society’s core values, sooner or later they will cause extensive damage. They could lead to an automated society with totalitarian features. In the worst case, a centralized artificial intelligence would control what we know, what we think and how we act. We are at the historic moment, where we have to decide on the right path — a path that allows us all to benefit from the digital revolution.
Oh, and for a concrete illustration of all the above, check this out:
How would behavioural and social control impact our lives? The concept of a Citizen Score, which is now being implemented in China, gives an idea. There, all citizens are rated on a one-dimensional ranking scale. Everything they do gives plus or minus points. This is not only aimed at mass surveillance. The score depends on an individual’s clicks on the Internet and their politically-correct conduct or not, and it determines their credit terms, their access to certain jobs, and travel visas. Therefore, the Citizen Score is about behavioural and social control. Even the behaviour of friends and acquaintances affects this score, i.e. the principle of clan liability is also applied: everyone becomes both a guardian of virtue and a kind of snooping informant, at the same time; unorthodox thinkers are isolated. Were similar principles to spread in democratic countries, it would be ultimately irrelevant whether it was the state or influential companies that set the rules. In both cases, the pillars of democracy would be directly threatened.
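The mechanism described in that excerpt — a single one-dimensional score, plus or minus points for individual actions, and "clan liability" through which friends' scores drag on one's own — can be sketched in a few lines of code. To be clear, this is purely an illustrative toy under assumptions of my own; the function name, the point values, and the `peer_weight` parameter are all invented, and nothing here reflects how any actual system is implemented:

```python
def update_score(score, action_points, friend_scores, peer_weight=0.1):
    """Hypothetical one-dimensional 'citizen score' update.

    Applies an action's plus/minus points, then pulls the score toward
    the mean of the friends' scores -- a crude stand-in for the
    'clan liability' effect the article describes.
    """
    score += action_points
    if friend_scores:
        peer_mean = sum(friend_scores) / len(friend_scores)
        score += peer_weight * (peer_mean - score)
    return score


# A citizen at 500 points performs a +10 action, but has friends
# averaging 500, so peer influence pulls the gain partway back:
new_score = update_score(500, 10, [400, 600])  # -> 509.0
```

Even a toy like this makes the article's point concrete: the peer term means your standing is never fully your own, which is exactly what turns everyone into "both a guardian of virtue and a kind of snooping informant."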
FULL ARTICLE: “Will Democracy Survive Big Data and Artificial Intelligence?”
Of course, none of this is real news to anybody who has been paying attention. It’s just something that people like me, and maybe like you, find troubling enough to highlight and comment on. And maybe, in the end, Cipher from The Matrix will turn out to have been right: Maybe ignorance really is bliss. Because from where I’m sitting, there doesn’t appear to be anything one can do to stop this steamrollering, metastasizing, runaway train of a dystopian trend. Talking about it is just that: talk. Which is one reason why I’ve lost a portion of the will that originally kept me blogging here for so many years. You can only play the role of Cassandra for so long before the intrinsic attraction begins to dissipate.