Big Data, Artificial Intelligence, and Dehumanization: Surrendering to the Death of Democracy


Greetings, Teeming Brainers. I’m just peeking in from the digital wings, amid much ongoing blog silence, to observe that many of the issues and developments — sociocultural, technological, and more — that I began furiously tracking here way back in 2006 are continuing to head in pretty much the same direction. A case in point is provided by the alarming information, presented in a frankly alarmed tone, that appears in this new piece from Scientific American (originally published in SA’s German-language sister publication, Spektrum der Wissenschaft):

Everything started quite harmlessly. Search engines and recommendation platforms began to offer us personalised suggestions for products and services. This information is based on personal and meta-data that has been gathered from previous searches, purchases and mobility behaviour, as well as social interactions. While officially, the identity of the user is protected, it can, in practice, be inferred quite easily. Today, algorithms know pretty well what we do, what we think and how we feel — possibly even better than our friends and family or even ourselves. Often the recommendations we are offered fit so well that the resulting decisions feel as if they were our own, even though they are actually not our decisions. In fact, we are being remotely controlled ever more successfully in this manner. The more is known about us, the less likely our choices are to be free and not predetermined by others.

But it won’t stop there. Some software platforms are moving towards “persuasive computing.” In the future, using sophisticated manipulation technologies, these platforms will be able to steer us through entire courses of action, be it for the execution of complex work processes or to generate free content for Internet platforms, from which corporations earn billions. The trend goes from programming computers to programming people. . . .

[I]t can be said that we are now at a crossroads. Big data, artificial intelligence, cybernetics and behavioral economics are shaping our society — for better or worse. If such widespread technologies are not compatible with our society’s core values, sooner or later they will cause extensive damage. They could lead to an automated society with totalitarian features. In the worst case, a centralized artificial intelligence would control what we know, what we think and how we act. We are at the historic moment, where we have to decide on the right path — a path that allows us all to benefit from the digital revolution.

Oh, and for a concrete illustration of all the above, check this out:

How would behavioural and social control impact our lives? The concept of a Citizen Score, which is now being implemented in China, gives an idea. There, all citizens are rated on a one-dimensional ranking scale. Everything they do gives plus or minus points. This is not only aimed at mass surveillance. The score depends on an individual’s clicks on the Internet and their politically-correct conduct or not, and it determines their credit terms, their access to certain jobs, and travel visas. Therefore, the Citizen Score is about behavioural and social control. Even the behaviour of friends and acquaintances affects this score, i.e. the principle of clan liability is also applied: everyone becomes both a guardian of virtue and a kind of snooping informant, at the same time; unorthodox thinkers are isolated. Were similar principles to spread in democratic countries, it would be ultimately irrelevant whether it was the state or influential companies that set the rules. In both cases, the pillars of democracy would be directly threatened.

FULL ARTICLE: Will Democracy Survive Big Data and Artificial Intelligence?

Of course, none of this is real news to anybody who has been paying attention. It’s just something that people like me, and maybe like you, find troubling enough to highlight and comment on. And maybe, in the end, Cipher from The Matrix will turn out to have been right: Maybe ignorance really is bliss. Because from where I’m sitting, there doesn’t appear to be anything one can do to stop this steamrollering, metastasizing, runaway-train-like dystopian trend. Talking about it is just that: talk. Which is one reason why I’ve lost a portion of the will that originally kept me blogging here for so many years. You can only play the role of Cassandra for so long before the intrinsic attraction begins to dissipate.

Lately the final passage of John David Ebert’s The New Media Invasion has been playing on my mind a lot. Writing about the trend identified in the book’s subtitle, “Digital Technologies and the World They Unmake,” Ebert concludes with this:

Thus, an entire world of Gutenbergian media is in process of becoming extinct — and this is currently experienced by literate types like myself as a major trauma — but the cognitive dialectic of art and technology is such that art will eventually heal the trauma inflicted upon our present psyches by these new technologies. It is a cultural healing process that is very much analogous to the grieving process of the human psyche when dealing with a major loss: eventually the psyche forgets, and in forgetting, ceases to hurt.

One day, we will wake up and wonder what it was we were ever worried about.

Reframe and expand Ebert’s discussion of our present apocalyptic media shift to encompass the totality of the digital revolution and all that it is bringing with it, including Big Data and A.I., and perhaps you’ll have a fitting epitaph for the world that’s being invaded and unmade: “What were we ever worried about?”

But then — and I keep coming back to this — if perhaps there’s some kind of absolute scale of value inherent in the nature of things (a notion that is unshakable for me), then it won’t matter if we somehow manage to psychologically “heal” into a happy acceptance of this new order, because maybe such healing will actually be a kind of ignorance, and not of the blissful sort but the dangerous and ultimately murderous and nightmarish sort. Maybe it will be like Howard Beale’s eventual capitulation, in Paddy Chayefsky’s and Sidney Lumet’s Network, to the inevitability of the robotic dehumanization that awaits us all in contemporary technocratic and technological society, against which he had been preaching and ranting tirelessly as “The Mad Prophet of the Airwaves”:

“Well, the time has come to say, is ‘dehumanization’ such a bad word? Whether it’s good or bad, that’s what is so. The whole world is becoming humanoid, creatures that look human but aren’t. The whole world, not just us. We’re just the most advanced country, so we’re getting there first. The whole world’s people are becoming mass-produced, programmed, numbered…”

And that was all the way back in 1976. Requiescat in pace, Brother Beale.


About Matt Cardin


Posted on February 28, 2017, in Science & Technology, Society & Culture. Bookmark the permalink. 4 Comments.

  1. You don’t want to make the mistake of seeing these trends, such as the convergence of big data and AI, as distinct events or silos. That would defeat the competitive advantages that the coming convergence would offer. We need to think differently, as we used to say at Apple: convergence and co-evolution are smarter than one-off or unrelated trends that represent one industry or market. The connectivity of trends is important to recognize.

  2. Great article, Matt. Good to see you on here again.
    You’re right: one day, when this is all said and done, the people then may think it’s the best thing ever. They will be taught to look back on the time when humans were free-thinking, self-reliant individuals as some kind of monstrous dark age they are happy to have missed.
    You can see that trend developing even now. As I’ve become older, more and more I am looked at as “old-fashioned” and uninformed because I see things differently or have different ideas.
    Good luck to you, my friend. Take care and keep writing; I always enjoy your posts.


  3. What if our present is the type of transformation humanity has experienced before? I’m specifically thinking of the Bronze Age collapse and the end of feudalism.

    Those were periods of mass disruption and conflict, related to large-scale societal changes in technology, culture, identity, etc. There is no doubt that humans were never the same again after those changes, but new ways of living, being, and relating replaced the old. Once that happened, few were capable of even suspecting it had once been different.

    It’s possible the changes afoot are even more vast than those of earlier eras. That would be the case if we were to experience genetic engineering that could forever alter the species on an even more fundamental level. And present climate change, mass extinctions, ecosystem collapse, and refugee crises could be a doozy like nothing seen since civilization began, worse than the natural disasters that helped bring down the Bronze Age empires.

    It’s hard to know. Humans seem to have a talent for complete societal and psychological transformation. And every time it happens it involves shock and trauma, the effects of which slowly dissipate. The question is, after the present changes, what will be left if and when everything settles down? Then again, we’ll never know, because these kinds of changes take at least centuries to play out.

    • Benjamin, while I tend to agree with a lot of what you’ve said, your conclusion seems to ignore one of your strongest points.

      Societal change right now is happening at a pace we haven’t experienced before. For the first time in history we have the ability to create, through our own actions, mass extinctions, etc. I don’t think we’re adapting fast enough to deal with the actions we’ve already set in motion.

      The problem is, centuries from now there might be no one left to evaluate the current changes.
