Big Data, Artificial Intelligence, and Dehumanization: Surrendering to the Death of Democracy
Greetings, Teeming Brainers. I’m just peeking in from the digital wings, amid much ongoing blog silence, to observe that many of the issues and developments — sociocultural, technological, and more — that I began furiously tracking here way back in 2006 are continuing to head in pretty much the same direction. A case in point is provided by the alarming information, presented in a frankly alarmed tone, that appears in this new piece from Scientific American (originally published in SA’s German-language sister publication, Spektrum der Wissenschaft):
Everything started quite harmlessly. Search engines and recommendation platforms began to offer us personalised suggestions for products and services. This information is based on personal and meta-data that has been gathered from previous searches, purchases and mobility behaviour, as well as social interactions. While officially, the identity of the user is protected, it can, in practice, be inferred quite easily. Today, algorithms know pretty well what we do, what we think and how we feel — possibly even better than our friends and family or even ourselves. Often the recommendations we are offered fit so well that the resulting decisions feel as if they were our own, even though they are actually not our decisions. In fact, we are being remotely controlled ever more successfully in this manner. The more is known about us, the less likely our choices are to be free and not predetermined by others.
But it won’t stop there. Some software platforms are moving towards “persuasive computing.” In the future, using sophisticated manipulation technologies, these platforms will be able to steer us through entire courses of action, be it for the execution of complex work processes or to generate free content for Internet platforms, from which corporations earn billions. The trend goes from programming computers to programming people. . . .
[I]t can be said that we are now at a crossroads. Big data, artificial intelligence, cybernetics and behavioral economics are shaping our society — for better or worse. If such widespread technologies are not compatible with our society’s core values, sooner or later they will cause extensive damage. They could lead to an automated society with totalitarian features. In the worst case, a centralized artificial intelligence would control what we know, what we think and how we act. We are at the historic moment, where we have to decide on the right path — a path that allows us all to benefit from the digital revolution.
Oh, and for a concrete illustration of all the above, check this out:
How would behavioural and social control impact our lives? The concept of a Citizen Score, which is now being implemented in China, gives an idea. There, all citizens are rated on a one-dimensional ranking scale. Everything they do gives plus or minus points. This is not only aimed at mass surveillance. The score depends on an individual’s clicks on the Internet and their politically-correct conduct or not, and it determines their credit terms, their access to certain jobs, and travel visas. Therefore, the Citizen Score is about behavioural and social control. Even the behaviour of friends and acquaintances affects this score, i.e. the principle of clan liability is also applied: everyone becomes both a guardian of virtue and a kind of snooping informant, at the same time; unorthodox thinkers are isolated. Were similar principles to spread in democratic countries, it would be ultimately irrelevant whether it was the state or influential companies that set the rules. In both cases, the pillars of democracy would be directly threatened.
FULL ARTICLE: “Will Democracy Survive Big Data and Artificial Intelligence?”
Of course, none of this is real news to anybody who has been paying attention. It’s just something that people like me, and maybe like you, find troubling enough to highlight and comment on. And maybe, in the end, Cipher from The Matrix will turn out to have been right: Maybe ignorance really is bliss. Because from where I’m sitting, there doesn’t appear to be anything one can do to stop this steamrollering, metastasizing, runaway-train-like dystopian trend. Talking about it is just that: talk. Which is one reason why I’ve lost a portion of the will that originally kept me blogging here for so many years. You can only play the role of Cassandra for so long before the intrinsic attraction begins to dissipate.
Lately the final passage of John David Ebert’s The New Media Invasion has been playing on my mind a lot. Writing about the trend identified in the book’s subtitle, “Digital Technologies and the World They Unmake,” Ebert concludes with this:
Thus, an entire world of Gutenbergian media is in process of becoming extinct — and this is currently experienced by literate types like myself as a major trauma — but the cognitive dialectic of art and technology is such that art will eventually heal the trauma inflicted upon our present psyches by these new technologies. It is a cultural healing process that is very much analogous to the grieving process of the human psyche when dealing with a major loss: eventually the psyche forgets, and in forgetting, ceases to hurt.
One day, we will wake up and wonder what it was we were ever worried about.
Reframe and expand Ebert’s discussion of our present apocalyptic media shift to encompass the totality of the digital revolution and all that it is bringing with it, including Big Data and A.I., and perhaps you’ll have a fitting epitaph for the world that’s being invaded and unmade: “What were we ever worried about?”
But then — and I keep coming back to this — if perhaps there’s some kind of absolute scale of value inherent in the nature of things (a notion that is unshakable for me), then it won’t matter if we somehow manage to psychologically “heal” into a happy acceptance of this new order, because maybe such healing will actually be a kind of ignorance, and not of the blissful sort but the dangerous and ultimately murderous and nightmarish sort. Maybe it will be like Howard Beale’s eventual capitulation, in Paddy Chayefsky’s and Sidney Lumet’s Network, to the inevitability of the robotic dehumanization that awaits us all in contemporary technocratic and technological society, against which he had been preaching and ranting tirelessly as “The Mad Prophet of the Airwaves”:
“Well, the time has come to say, is ‘dehumanization’ such a bad word? Whether it’s good or bad, that’s what is so. The whole world is becoming humanoid, creatures that look human but aren’t. The whole world, not just us. We’re just the most advanced country, so we’re getting there first. The whole world’s people are becoming mass-produced, programmed, numbered…”
And that was all the way back in 1976. Requiescat in pace, Brother Beale.
Posted on February 28, 2017, in Science & Technology, Society & Culture and tagged apocalypse watch, artificial intelligence, big data, Dystopia, john david ebert, network.