Here’s renowned neuroscientist Christof Koch explaining in a Wall Street Journal piece that our future will be a dystopian nightmare in which humans will necessarily become ever more completely fused on a neurological level with super-sophisticated computer technologies. This, he says, will be a non-negotiable requirement if we want to keep up with the artificial intelligences that will be billions of times smarter than us, and that will otherwise utterly rule humanity and pose an existential threat to us in all kinds of ways that we, with our currently unenhanced meat brains, can hardly imagine.
Or actually, Koch speaks not grimly but enthusiastically of this future (and semi-present) scenario. He views the technological enhancement of the human brain for purposes of keeping pace with AI as an exciting thing. The negative gloss on it is mine. What a wonderful world, he avers. “Resistance is futile. You will be assimilated,” my own meat brain keeps hearing.
Whether you are among those who believe that the arrival of human-level AI signals the dawn of paradise, such as the technologist Ray Kurzweil, or the sunset of the age of humans, such as the prominent voices of the philosopher Nick Bostrom, the physicist Stephen Hawking and the entrepreneur Elon Musk, there is no question that AI will profoundly influence the fate of humanity.
There is one way to deal with this growing threat to our way of life. Instead of limiting further research into AI, we should turn it in an exciting new direction. To keep up with the machines we’re creating, we must move quickly to upgrade our own organic computing machines: We must create technologies to enhance the processing and learning capabilities of the human brain. . . .
Unlike, say, the speed of light, there are no known theoretical limits to intelligence. While our brain’s computational power is more or less fixed by evolution, computers are constantly growing in power and flexibility. This is made possible by a vast ecosystem of several hundred thousand hardware and software engineers building on each other’s freely shared advances and discoveries. How can the human species keep up? . . .
In the face of this relentless onslaught, we must actively shape our future to avoid dystopia. We need to enhance our cognitive capabilities by directly intervening in our nervous systems.
We are already taking steps in this direction. . . .
My hope is that someday, a person could visualize a concept — say, the U.S. Constitution. An implant in his visual cortex would read this image, wirelessly access the relevant online Wikipedia page and then write its content back into the visual cortex, so that he can read the webpage with his mind’s eye. All of this would happen at the speed of thought. Another implant could translate a vague thought into a precise and error-free piece of digital code, turning anyone into a programmer.
People could set their brains to keep their focus on a task for hours on end, or control the length and depth of their sleep at will.
Another exciting prospect is melding two or more brains into a single conscious mind by direct neuron-to-neuron links — similar to the corpus callosum, the bundle of two hundred million fibers that link the two cortical hemispheres of a person’s brain. This entity could call upon the memories and skills of its member brains, but would act as one “group” consciousness, with a single, integrated purpose to coordinate highly complex activities across many bodies.
These ideas are compatible with everything we know about the brain and the mind. Turning them from science fiction into science fact requires a crash program to design safe, inexpensive, reliable and long-lasting devices and procedures for manipulating brain processes inside their protective shell. It must be focused on the end-to-end enhancement of human capabilities. . . .
While the 20th century was the century of physics — think the atomic bomb, the laser and the transistor — this will be the century of the brain. In particular, it will be the century of the human brain — the most complex piece of highly excitable matter in the known universe. It is within our reach to enhance it, to reach for something immensely powerful we can barely discern.
Full article: “To Keep Up with AI, We’ll Need High-Tech Brains” (You may or may not encounter a paywall)
Greetings, Teeming Brainers. I’m just peeking in from the digital wings, amid much ongoing blog silence, to observe that many of the issues and developments — sociocultural, technological, and more — that I began furiously tracking here way back in 2006 are continuing to head in pretty much the same direction. A case in point is provided by the alarming information, presented in a frankly alarmed tone, that appears in this new piece from Scientific American (originally published in SA’s German-language sister publication, Spektrum der Wissenschaft):
Everything started quite harmlessly. Search engines and recommendation platforms began to offer us personalised suggestions for products and services. This information is based on personal and meta-data that has been gathered from previous searches, purchases and mobility behaviour, as well as social interactions. While officially, the identity of the user is protected, it can, in practice, be inferred quite easily. Today, algorithms know pretty well what we do, what we think and how we feel — possibly even better than our friends and family or even ourselves. Often the recommendations we are offered fit so well that the resulting decisions feel as if they were our own, even though they are actually not our decisions. In fact, we are being remotely controlled ever more successfully in this manner. The more is known about us, the less likely our choices are to be free and not predetermined by others.
But it won’t stop there. Some software platforms are moving towards “persuasive computing.” In the future, using sophisticated manipulation technologies, these platforms will be able to steer us through entire courses of action, be it for the execution of complex work processes or to generate free content for Internet platforms, from which corporations earn billions. The trend goes from programming computers to programming people. . . .
[I]t can be said that we are now at a crossroads. Big data, artificial intelligence, cybernetics and behavioral economics are shaping our society — for better or worse. If such widespread technologies are not compatible with our society’s core values, sooner or later they will cause extensive damage. They could lead to an automated society with totalitarian features. In the worst case, a centralized artificial intelligence would control what we know, what we think and how we act. We are at the historic moment, where we have to decide on the right path — a path that allows us all to benefit from the digital revolution.
Oh, and for a concrete illustration of all the above, check this out:
How would behavioural and social control impact our lives? The concept of a Citizen Score, which is now being implemented in China, gives an idea. There, all citizens are rated on a one-dimensional ranking scale. Everything they do gives plus or minus points. This is not only aimed at mass surveillance. The score depends on an individual’s clicks on the Internet and their politically-correct conduct or not, and it determines their credit terms, their access to certain jobs, and travel visas. Therefore, the Citizen Score is about behavioural and social control. Even the behaviour of friends and acquaintances affects this score, i.e. the principle of clan liability is also applied: everyone becomes both a guardian of virtue and a kind of snooping informant, at the same time; unorthodox thinkers are isolated. Were similar principles to spread in democratic countries, it would be ultimately irrelevant whether it was the state or influential companies that set the rules. In both cases, the pillars of democracy would be directly threatened.
FULL ARTICLE: “Will Democracy Survive Big Data and Artificial Intelligence?”
Of course, none of this is real news to anybody who has been paying attention. It’s just something that people like me, and maybe like you, find troubling enough to highlight and comment on. And maybe, in the end, Cipher from The Matrix will turn out to have been right: Maybe ignorance really is bliss. Because from where I’m sitting, there doesn’t appear to be anything one can do to stop this steamrollering, metastasizing, runaway train-like dystopian trend. Talking about it is just that: talk. Which is one reason why I’ve lost a portion of the will that originally kept me blogging here for so many years. You can only play the role of Cassandra for so long before the intrinsic attraction begins to dissipate.
The following is excerpted and adapted from the introduction to A Darke Phantastique: Encounters with the Uncanny and Other Magical Things, edited by Jason V Brock for Cycatrix Press. Jason’s full introduction is titled “An Abiding Darkness, A Phantastique Light.” The book also features a foreword by Ray Bradbury in the form of a previously unpublished 1951 essay titled “The Beginnings of Imagination.”
Why do we, as a species, create things? What is it to “create”? What is the purpose of such activity?
These are fascinating questions, and likely no one has a complete answer to them. However, from my vantage point, in its most essential form, creativity is making the divine out of the mundane. It is taking the fundamental life force of the human spirit and resolving that unfocused energy into something akin to the spiritual. (Sexuality is another example of this process, and is tied to creativity.)
Shamans were often catalysts of this in pre-religious contexts. In more organized societies, religion has attempted to channel energy of this nature with decidedly mixed results, often heaping upon the creative impulse the added burdens of castigation and humiliation, lest the individual attempt to take their (rightful) place amongst the gods. Just as one need not believe in a godhead to live a moral and righteous life, one can be a creative without the insufferable tyranny of an organized gathering of impotents taking umbrage at every word written, every stroke painted, every dish prepared, every frame captured. We are the authors of our lives and the masters of the final outcome, not the politicians or religious leaders of the moment.
Who are these individuals to dictate to us? How are they more able to advise us than any other person in the world, including ourselves? Certainly none of us needs a pope, a president, a lama, or a god to assist us in navigating any moral conviction; it is an innate function of socialization and reasoning. We have imbued such people with this ability; they are not actually illuming our existence. To understand this takes courage, passion, skill, talent, and inspiration. Otherwise we are all doomed, in the words of Thoreau, to lead “lives of quiet desperation.” And then the grave, followed by the unknown. Why not take one’s life and steer it, rather than listen to the protestations of less valiant persons hiding from the possible?
Other questions of interest to humanity — and to creators, especially in our science-driven, technologically dependent age — present themselves upon analysis: What is the fundamental nature of reality? Why are we alive? Are we alone in the universe? When does consciousness become non-artificial? If a humanoid (or non-human animal for that matter) has enough experience and wisdom to have insight, that means the threshold of insight has been crossed, which means the “artificial” aspects of Artificial Intelligence (simply programming data points or relying on input/output mechanisms) will have been breached. It isn’t artificial at that point. It just “is.”
“What is the fundamental nature of reality? Why are we alive? Are we alone in the universe? When does consciousness become non-artificial?”
Using that as an illustration, we realize that we are at an intriguing juncture as a world-changing species. When the first non-living organism begins to manifest actual sentience (as opposed to simple self-awareness), true emotions (not just programmed reactions), and is able, for example, to produce a profound work of art — a masterpiece of literature, painting, music, cinema, or the equivalent — then there will be no fundamental difference between “AI” and just plain garden-variety “I.” Once that happens, we will really have to examine the ethics of how we treat things that are neither born nor cultivated, but built for a purpose — something humanity struggles with now as it relates to non-human creatures and even to other humans based on sexuality, gender, and race, all of which are natural manifestations of DNA expression on Earth.
And indeed, what purpose is there to creating such a being? If we limit their life course to what “we decide” versus their own free will, isn’t that slavery? What if they are psychopathic and intentionally shut off the electrical grid to a hospital, for example, or commit an act of terrorism? Would that be a crime? I think it means we would need to reconsider many aspects of jurisprudence and mental health, for a start. Additionally, it is said that one learns more from failure than success, so does that mean that for higher levels of consciousness to be attained, AI must first have input from extremely negative learning experiences in order to garner enough data for such things as insight or empathy to manifest? Where does that lead? Uploading all the misery of the Holocaust? The horror of a cancer diagnosis? Deprivation due to the inability to see, hear, or speak, like Helen Keller?
And who are we to decide that these beings are mortals? (They could, technically, be immortals with the current technologies.) Are these prerequisites for such phenomena as the creation of emotionally moving artworks or philosophy, including knowledge of one’s own eventual death? Is immortality a good thing for humanity, either organic or manufactured?
I will address these concerns in Part 2, to be published soon.
In a word: wow. This new short film, released on July 30 and currently receiving enthusiastic praise all over the place, is a beautifully realized piece of short-form dystopian science fiction.
It tells the story of a near future in which, to quote the official press release, “a neurologist and two homicide detectives use experimental brain taping technology to question a murder victim about his final moments.” It stars Paul Reubens (who’s a joy to watch here in a dramatic role) as the neurologist, with the other roles filled by equally impressive actors.
The writer-director, acclaimed graphic novelist M. F. Wilson, invokes the idea of the Singularity, especially in its Kurzweilian iteration, as his main inspiration:
I was influenced by the theories of Ray Kurzweil on the Singularity and digital immortality and curious to see how the law will deal with the situations that arise from it. I’m excited about the idea of copying memories into code. Imagine that after your body dies, you can go on living in a digital state. This technology is in our near future and will challenge the very definition of life and death. It makes a great basis for a high-tech crime story…
Short of the Week offers a nice description of the film’s really impressive style, tone, and production quality:
Visually inspired by Fritz Lang’s Metropolis, one of the director’s favourite science-fiction films, the dark, industrial aesthetics of The Final Moments of Karl Brant make the short feel like a cross between Blade Runner and Se7en. With Brett Pawlak’s cinematography, J.R. Hawbaker’s costume design and Level 256’s visual FX all using their extensive industry experience to paint a gritty and uncompromising vision of the future.
Enough with the preamble. Just watch.
Sounds like a science fiction idea, doesn’t it? Well, of course, it is a science fiction idea, and a venerable one at that, with roots that reach back to the early 19th century, when Mary Shelley processed the cultural fears and fascinations of an entire era by writing Frankenstein — an act which was, notably, inspired by a hideous nightmare, and which in turn inspired an apparently immortal cultural fascination (plays, movies, etc.) — all of which means the novel, with its ur-story of a human creation achieving consciousness and then turning on its creator, stands as an eruption from the unconscious mind.
(“Naturally, of course,” one might say, if one is aware of the deep roots of Western science and religion, which are on open display right there in the undisguised fact of Ms. Shelley’s direct inspiration by, on the one hand, Paradise Lost, and on the other hand, modern science’s emergence out of a crucible of quasi magical/mystical ideas with cultural roots predating the birth of civilization itself.)
But what happened earlier this year wasn’t fiction — or at least it wasn’t openly so. As reported by The New York Times on Saturday (“Scientists Worry Machines May Outsmart Man,” July 25), a group of computer scientists held a meeting in February, sponsored by the Association for the Advancement of Artificial Intelligence, to express and address authentic fears that “further advances [in AI] could create profound social disruptions and even have dangerous consequences.”
The Times article starts with this:
A robot that can open doors and find electrical outlets to recharge itself. Computer viruses that no one can stop. Predator drones, which, though still controlled remotely by humans, come close to a machine that can kill autonomously.
Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.
It goes on to report that most of the assembled researchers — “leading computer scientists, artificial intelligence researchers and roboticists who met at the Asilomar Conference Grounds on Monterey Bay in California” — said they don’t expect the creation of “highly centralized superintelligences” or the spontaneous eruption of artificial intelligence through the Internet, but they did agree “that robots that can kill autonomously are either already here or will be soon.”
The good news: We’re not even close to developing something like the HAL 9000 in 2001: A Space Odyssey.
The bad news: There is, right now, “legitimate concern that technological progress would transform the work force by destroying a widening range of jobs, as well as force humans to learn to live with machines that increasingly copy human behaviors.”
Here’s where I would suggest something to all interested parties: If you haven’t read Frankenstein, or haven’t read it for a while, go back and brush up on it. Then read a good deal of the worthwhile literary and cultural criticism that has been produced about it and its legacy. Renowned science fiction author Brian Aldiss called the Frankenstein story “the first great myth of the industrial age.” Philosopher and culture critic Theodore Roszak, who for 40 years has been so apt at diagnosing many of our cultural ills, has called Frankenstein “the richest (and darkest) literary myth the culture of science has produced.” Joyce Carol Oates has characterized the novel itself as “a parable for our time, an enduring prophecy.”
This all means we may find some necessary guidance, or at least a warning, in the Frankenstein myth.
What I’m saying is simply this, to quote my own words from the concluding paragraph of a paper I wrote a few years ago that offers a reading of Frankenstein as a nihilistic parable about the fate of Western civilization:
We can find in Frankenstein a parable about what it means to commit ourselves to the quest for power over nature through scientific objectivity. One does not have to agree with Mary Shelley’s dire prognosis . . . . But I do think that we cannot afford to ignore “the first great myth of the industrial age,” “the central myth of western culture,” and I suspect that in the future, as we Westerners continue our journey through the dark night of psychic alienation in the urban-industrial technological landscape we have created, we may find ourselves turning more and more to it, in the form of further critical studies and additional literary and cinematic reworkings, as a subject for entertainment and reflection, and even guidance.
That paper won’t appear in my Dark Awakenings collection later this year (although it did appear in Penny Dreadful #14 in 2001), but given the dystopian SF-like nature of the report about AI scientists convening to share their fears, I think I’ll post the paper at my mattcardin.com Website when it’s fully built in the near future, since it looks at the philosophical and spiritual side of such developments.
In the meantime, for a not-so-spiritual but much more entertaining consideration of the same issues (more or less), please consider the following trailer for a movie that I still love after nearly 25 years, no matter how trashy it is: