Category Archives: Science & Technology
Apparently, working from home during the current disruption and suspension of all normal activities due to the COVID-19 pandemic is leaving me too much time and mental space for reflection. Please pardon me while I ill-advisedly correlate some contents and piece together some dissociated knowledge.
Bernardo Kastrup in Scientific American:
[A]s Kuhn pointed out, when enough “anomalies” — empirically undeniable observations that cannot be accommodated by the reigning belief system — accumulate over time and reach critical mass, paradigms change. We may be close to one such defining moment today, as an increasing body of evidence from quantum mechanics (QM) renders the current paradigm [which holds that nature consists of arrangements of matter/energy outside and independent of mind] untenable. . . .
To reconcile [recent experimental] results with the current paradigm would require a profoundly counterintuitive redefinition of what we call “objectivity.” And since contemporary culture has come to associate objectivity with reality itself, the science press felt compelled to report on this by pronouncing, “Quantum physics says goodbye to reality.”
The tension between the anomalies and the current paradigm can only be tolerated by ignoring the anomalies. This has been possible so far because the anomalies are only observed in laboratories. Yet we know that they are there, for their existence has been confirmed beyond reasonable doubt. Therefore, when we believe that we see objects and events outside and independent of mind, we are wrong in at least some essential sense. A new paradigm is needed to accommodate and make sense of the anomalies; one wherein mind itself is understood to be the essence — cognitively but also physically — of what we perceive when we look at the world around ourselves.

More: “Should Quantum Anomalies Make Us Rethink Reality?”
H. P. Lovecraft in “The Dreams in the Witch House”:
Possibly Gilman ought not to have studied so hard. Non-Euclidean calculus and quantum physics are enough to stretch any brain; and when one mixes them with folklore, and tries to trace a strange background of multi-dimensional reality behind the ghoulish hints of the Gothic tales and the wild whispers of the chimney-corner, one can hardly expect to be wholly free from mental tension.

More: “The Dreams in the Witch House”
Lovecraft in The Dream-Quest of Unknown Kadath:
There were, in such voyages, incalculable local dangers; as well as that shocking final peril which gibbers unmentionably outside the ordered universe, where no dreams reach; that last amorphous blight of nethermost confusion which blasphemes and bubbles at the centre of all infinity — the boundless daemon-sultan Azathoth, whose name no lips dare speak aloud, and who gnaws hungrily in inconceivable, unlighted chambers beyond time amidst the muffled, maddening beating of vile drums and the thin, monotonous whine of accursed flutes; to which detestable pounding and piping dance slowly, awkwardly, and absurdly the gigantic ultimate gods, the blind, voiceless, tenebrous, mindless Other Gods whose soul and messenger is the crawling chaos Nyarlathotep.

More: The Dream-Quest of Unknown Kadath
Me in “Teeth”:
On the subatomic level, I read, particles flash into and out of existence for no discernible reason, and the behavior of any single particle is apparently arbitrary and usually unpredictable. If there is a cause or “purpose” behind this behavior, then it is one that the human mind is, to all appearances, structurally prevented from comprehending. In other words, for all we know, the fundamental ruling principles at the most basic level of physical reality may well be what our minds and languages must necessarily label as “chaos” and “madness” . . . .
[W]hat is happening is in fact a profound and far-reaching reordering of reality itself — societal, cultural, personal, and even physical. In essence, the prophecies of Lovecraft and Nietzsche are coming true right before our eyes, with effects that are not only personal and cultural but ontological. Our excess of vast scientific knowledge and technological prowess has proceeded in lockstep with a collective descent into species-level insanity. You only have to watch two minutes of television, glance at a headline, or eavesdrop on a random conversation to learn of it. Ignorance and idiocy. Riots and revolutions. These and a thousand other signposts like them are only the most pointed and obvious manifestations of the all-pervasive malaise that has come to define us. And since, as Sankara observed, we are nothing but particularized manifestations of the Ground of Being itself, we are not only witnesses to this breakdown but participants in it, enablers of the transformation of the world into a vale of horror through the metaphysical potency of our very witnessing. God looks out through each of our eyes, an abyss of insatiable hunger and infinite teeth, and the dark light of His consciousness makes each of us a lamp that illuminates a new and terrible truth.

More: To Rouse Leviathan
Robert Kaplan, writing for The Washington Post:
It is impossible to imagine Trump and his repeated big lies that go viral except in the digital-video age. It is impossible to imagine our present political polarization except in the age of the Internet, which drives people to sites of extreme views that validate their preexisting prejudices. And, in the spirit of Hollywood, it is impossible to imagine the degree and intensity of emotional and sensory manipulation, false rumors, exaggerations and character assassination that decay our public dialogue except in this new and terrifying age of technology which has only just begun.
Digital-video technology, precisely because it is given to manipulation, is inherently controlling. Think of how the great film directors of the 20th century were able to take over your mind for a few hours: a new experience for audiences that previous generations had never known. Theater may be as old as the ancient Greeks, but the technology of film lent a new and powerful force to the theatrical experience. Moreover, it was contained within a limited time period, and afterward you came back to the real world.
In the 21st century, dictators may have the capability to be the equivalent of film directors, and the show never stops. Indeed, Joseph Goebbels would undoubtedly thrive in today’s world. As for warfare itself, it will be increasingly about dividing and demoralizing enemy populations through disinformation campaigns whose techniques are still in their infancy. . . .
Never before have we had to fight for democracy and individual rights as now in this new and — in some sense — dark age of technology. We must realize that the fight for democracy is synonymous with the fight for objectivity, which lies at the core of professional journalism — a calling whose foundational spirit was forged in the print-and-typewriter age, when mainly the movies were fake.
We will fight best by thinking tragically to avoid tragedy. This means learning to think like the tyrants who feed and prosper on misinformation so we can keep several steps ahead of them. Only in that way can we build safeguards against the specific dangers of the digital experience. The pioneers of Silicon Valley were inherent optimists who simply believed in connecting the world. But it is precisely such integration that provides our authoritarian enemies with access into our own democratic systems. The future will be about wars of integration rather than wars of geographic separation. So now constructive pessimism is called for. The innocent days when illusions were the province of movie stage sets are way behind us.
Full text: “Everything Here Is Fake”
The following insights are excerpted from a brief but engaging NPR piece that traces the cultural arc from Vint Cerf (the “inventor of the Internet”) and his early naive optimism about this new technology, to William Gibson’s uncanny prescience in forecasting exactly where the Internet would really take us (to a corporate-controlled cyberdystopia with sharply curtailed human relationships), to Black Mirror creator Charlie Brooker’s ongoing exploration of the darkest corners of the whole thing:
Initially, Cerf was trying to create an Internet through which scientists and academics from all over the world could share data and research. Then, one day in 1988, Cerf says he went to a conference for commercial vendors where they were selling products for the Internet. “I just stood there thinking, ‘My God! Somebody thinks they’re going to make money out of the Internet.’ ” Cerf was surprised and happy. “I was a big proponent of that. My friends in the community thought I was nuts. ‘Why would you let the unwashed masses get access to the Internet?’ And I said, ‘Because I want everybody to take advantage of its capability.’ ”
Clearly, Cerf is an optimist. That is what allowed him to dream big. But, in retrospect, some of the decisions his team made seem hopelessly naive, especially for a bunch of geniuses. They made it possible to surf the Internet anonymously — unlike a telephone, you don’t have a unique number that announces who you are. We know how that turned out. People with less lofty ambitions than Cerf used that loophole for cybercrime, international espionage and online harassment.
Cerf admits all that dark stuff never crossed his mind. “And we have to cope with that — I mean, welcome to the real world,” he says. . . .
Somehow [William] Gibson was able to imagine the potential scale of it — all those computers connected together. . . . But, it isn’t just the Internet that Gibson saw coming. In Neuromancer, the Internet has become dominated by huge multinational corporations fighting off hackers. The main character is a washed-up criminal hacker who goes to work for an ex-military officer to regain his glory. And get this: The ex-military guy is deeply involved in cyber-espionage between the U.S. and Russia.
Gibson says he didn’t need to try a computer or see the Internet to imagine this future. “The first people to embrace a technology are the first to lose the ability to see it objectively,” he says. He says he’s more interested in how people behave around new technologies. He likes to tell a story about how TV changed New York City neighborhoods in the 1940s. “Fewer people sat out on the stoops at night and talked to their neighbors, and it was because everyone was inside watching television,” he says. “No one really noticed it at the time as a kind of epochal event, which I think it was” . . . .
Brooker has a certain amount of frustration with the leaders in tech. “It’s felt like tech companies have for years just put this stuff out there,” he says. “And they distance themselves from the effects of their product effectively by saying, ‘Oh, we’re just offering a service.’ ” Brooker sees each new technology more like an untested drug waiting to launch us on a very bad trip. Each episode of Black Mirror is like its own laboratory testing a technology that is already out, but pushing it by mixing in common human behaviors and desires.
Here’s renowned neuroscientist Christof Koch explaining in a Wall Street Journal piece that our future will be a dystopian nightmare in which humans will necessarily become ever more completely fused, on a neurological level, with super-sophisticated computer technologies. This will, he says, be a non-negotiable requirement if we want to keep up with the artificial intelligences that will be billions of times smarter than us, and that will otherwise utterly rule humanity and pose an existential threat to us in all kinds of ways that we, with our currently unenhanced meat brains, can hardly imagine.
Or actually, Koch speaks not grimly but enthusiastically of this future (and semi-present) scenario. He views the technological enhancement of the human brain for purposes of keeping pace with AI as an exciting thing. The negative gloss on it is mine. What a wonderful world, he avers. “Resistance is futile. You will be assimilated,” my own meat brain keeps hearing.
Whether you are among those who believe that the arrival of human-level AI signals the dawn of paradise, such as the technologist Ray Kurzweil, or the sunset of the age of humans, such as the prominent voices of the philosopher Nick Bostrom, the physicist Stephen Hawking and the entrepreneur Elon Musk, there is no question that AI will profoundly influence the fate of humanity.
There is one way to deal with this growing threat to our way of life. Instead of limiting further research into AI, we should turn it in an exciting new direction. To keep up with the machines we’re creating, we must move quickly to upgrade our own organic computing machines: We must create technologies to enhance the processing and learning capabilities of the human brain. . . .
Unlike, say, the speed of light, there are no known theoretical limits to intelligence. While our brain’s computational power is more or less fixed by evolution, computers are constantly growing in power and flexibility. This is made possible by a vast ecosystem of several hundred thousand hardware and software engineers building on each other’s freely shared advances and discoveries. How can the human species keep up? . . .
In the face of this relentless onslaught, we must actively shape our future to avoid dystopia. We need to enhance our cognitive capabilities by directly intervening in our nervous systems.
We are already taking steps in this direction. . . .
My hope is that someday, a person could visualize a concept — say, the U.S. Constitution. An implant in his visual cortex would read this image, wirelessly access the relevant online Wikipedia page and then write its content back into the visual cortex, so that he can read the webpage with his mind’s eye. All of this would happen at the speed of thought. Another implant could translate a vague thought into a precise and error-free piece of digital code, turning anyone into a programmer.
People could set their brains to keep their focus on a task for hours on end, or control the length and depth of their sleep at will.
Another exciting prospect is melding two or more brains into a single conscious mind by direct neuron-to-neuron links — similar to the corpus callosum, the bundle of two hundred million fibers that link the two cortical hemispheres of a person’s brain. This entity could call upon the memories and skills of its member brains, but would act as one “group” consciousness, with a single, integrated purpose to coordinate highly complex activities across many bodies.
These ideas are compatible with everything we know about the brain and the mind. Turning them from science fiction into science fact requires a crash program to design safe, inexpensive, reliable and long-lasting devices and procedures for manipulating brain processes inside their protective shell. It must be focused on the end-to-end enhancement of human capabilities. . . .
While the 20th century was the century of physics — think the atomic bomb, the laser and the transistor — this will be the century of the brain. In particular, it will be the century of the human brain — the most complex piece of highly excitable matter in the known universe. It is within our reach to enhance it, to reach for something immensely powerful we can barely discern.
Full article: “To Keep Up with AI, We’ll Need High-Tech Brains” (You may or may not encounter a paywall)
NYU marketing professor Scott Galloway, writing for Esquire:
Our brains are sophisticated enough to ask very complex questions but not sophisticated enough to answer them. Since Homo sapiens emerged from caves, we’ve relied on prayer to address that gap: We lift our gaze to the heavens, send up a question, and wait for a response from a more intelligent being. “Will my kid be all right?” “Who might attack us?”
As Western nations become wealthier, organized religion plays a smaller role in our lives. But the void between questions and answers remains, creating an opportunity. As more and more people become alienated from traditional religion, we look to Google as our immediate, all-knowing oracle of answers from trivial to profound. Google is our modern-day god. Google appeals to the brain, offering knowledge to everyone, regardless of background or education level. If you have a smartphone or an Internet connection, your prayers will always be answered: “Will my kid be all right?” “Symptoms and treatment of croup. . .” “Who might attack us?” “Nations with active nuclear-weapons programs . . .”
Think back on every fear, every hope, every desire you’ve confessed to Google’s search box and then ask yourself: Is there any entity you’ve trusted more with your secrets? Does anybody know you better than Google?
Full article: “Silicon Valley’s Tax-Avoiding, Job-Killing, Soul-Sucking Machine”
At the beginning of each semester, when I deliver a mini-sermon about my complete ban on phones — and also, for almost all purposes, laptops — in my classroom, I tell my students the very thing that journalist Zat Rana gets at in a recent article for Quartz. A smartphone, or almost any cell phone, in your hand, on your desk, or even in your pocket as you’re trying to concentrate on other important things is a vampire demon powered by dystopian corporate overlords whose sole purpose is to suck your soul by siphoning away your attention and immersing you in a portable, customized Matrix.
Or as Rana says, in less metaphorical language:
One of the biggest problems of our generation is that while the ability to manage our attention is becoming increasingly valuable, the world around us is being designed to steal away as much of it as possible. . . . Companies like Google and Facebook aren’t just creating products anymore. They’re building ecosystems. And the most effective way to monetize an ecosystem is to begin with engagement. It’s by designing their features to ensure that we give up as much of our attention as possible.
Rana offers three pieces of sound advice for helping to reclaim your attention (which is the asset referred to in the title): mindfulness meditation, “ruthless single-tasking,” and regular periods of deliberate detachment from the digital world.
Interestingly, it looks like there’s a mini-wave of this type of awareness building in the mediasphere. Rana’s article for Quartz was published on October 2. Four days later The Guardian published a provocative and alarming piece with this teaser: “Google, Twitter and Facebook workers who helped make technology so addictive are disconnecting themselves from the internet. Paul Lewis reports on the Silicon Valley refuseniks alarmed by a race for human attention.” It’s a very long and in-depth article. Here’s a taste:
There is growing concern that as well as addicting users, technology is contributing toward so-called “continuous partial attention”, severely limiting people’s ability to focus, and possibly lowering IQ. One recent study showed that the mere presence of smartphones damages cognitive capacity — even when the device is turned off. “Everyone is distracted,” [Justin] Rosenstein says. “All of the time.”
But those concerns are trivial compared with the devastating impact upon the political system that some of Rosenstein’s peers believe can be attributed to the rise of social media and the attention-based market that drives it. . . .
Tech companies can exploit such vulnerabilities to keep people hooked; manipulating, for example, when people receive “likes” for their posts, ensuring they arrive when an individual is likely to feel vulnerable, or in need of approval, or maybe just bored. And the very same techniques can be sold to the highest bidder. . . .
“The dynamics of the attention economy are structurally set up to undermine the human will,” [ex-Google strategist James Williams] says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.” If Apple, Facebook, Google, Twitter, Instagram and Snapchat are gradually chipping away at our ability to control our own minds, could there come a point, I ask, at which democracy no longer functions?
“Will we be able to recognise it, if and when it happens?” Williams replies. “And if we can’t, then how do we know it hasn’t happened already?”
In the same vein, Nicholas Carr (no stranger to The Teeming Brain’s pages) published a similarly aimed — and even a similarly titled — essay in the Weekend Review section of The Wall Street Journal on the very day the Guardian article appeared (October 6). “Research suggests that as the brain grows dependent on phone technology, the intellect weakens,” says the teaser. Here’s a representative passage from the essay itself:
Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object in the environment that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.” Media and communication devices, from telephones to TV sets, have always tapped into this instinct. Whether turned on or switched off, they promise an unending supply of information and experiences. By design, they grab and hold our attention in ways natural objects never could.
But even in the history of captivating media, the smartphone stands out. It’s an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what [Adrian] Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it’s part of the surroundings — which it always is. Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library, and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That’s what a smartphone represents to us. No wonder we can’t take our minds off it.
Full Text: “How Smartphones Hijack Our Minds”
At his blog, Carr noted that his essay and the Guardian article had appeared on the same day. He also remarked on the similarity between the two titles, calling it a “telling coincidence” and commenting:
It’s been clear for some time that smartphones and social-media apps are powerful distraction machines. They routinely divide our attention. But the “hijack” metaphor — I took it from Adrian Ward’s article “Supernormal” — implies a phenomenon greater and more insidious than distraction. To hijack something is to seize control of it from its rightful owner. What’s up for grabs is your mind.
Perhaps the most astonishing thing about all of this is that John Carpenter warned us about it three decades ago in They Live, and not vaguely, but quite specifically and pointedly. The only difference is that the technology in his (quasi-)fictional presentation was television. Well, that, plus the fact that his evil overlords really were ill-intentioned, whereas ours may be in some cases as much victims of their own devices as we are. In any event:
There is a signal broadcast every second of every day through our television sets. Even when the set is turned off, the brain receives the input. . . . Our impulses are being redirected. We are living in an artificially induced state of consciousness that resembles sleep. . . . We have been lulled into a trance.
From an essay by Ed Finn, founding director of the Center for Science and the Imagination at Arizona State University:
We are all centaurs now, our aesthetics continuously enhanced by computation. Every photograph I take on my smartphone is silently improved by algorithms the second after I take it. Every document autocorrected, every digital file optimised. Musicians complain about the death of competence in the wake of Auto-Tune, just as they did in the wake of the synthesiser in the 1970s. It is difficult to think of a medium where creative practice has not been thoroughly transformed by computation and an attendant series of optimisations. . . .
Today, we experience art in collaboration with these algorithms. How can we disentangle the book critic, say, from the highly personalised algorithms managing her notes, communications, browsing history and filtered feeds on Facebook and Instagram? . . . .
The immediate creative consequence of this sea change is that we are building more technical competence into our tools. It is getting harder to take a really terrible digital photograph, and in correlation the average quality of photographs is rising. From automated essay critiques to algorithms that advise people on fashion errors and coordinating outfits, computation is changing aesthetics. When every art has its Auto-Tune, how will we distinguish great beauty from an increasingly perfect average? . . .
We are starting to perceive the world through computational filters, perhaps even organising our lives around the perfect selfie or defining our aesthetic worth around the endorsements of computationally mediated ‘friends’. . . .
Human creativity has always been a response to the immense strangeness of reality, and now its subject has evolved, as reality becomes increasingly codeterminate, and intermingled, with computation. If that statement seems extreme, consider the extent to which our fundamental perceptions of reality — from research in the physical sciences to finance to the little screens we constantly interject between ourselves and the world — have changed what it means to live, to feel, to know. As creators and appreciators of the arts, we would do well to remember all the things that Google does not know.
FULL TEXT: “Art by Algorithm”
Recently Daryl Bem defended his famous research into precognition in a letter to The Chronicle of Higher Education. More recently, as in this week, Slate published a major piece about Bem and his research that delves deeply into its implications for contemporary science — especially psychology and the other social sciences (or “social sciences”), but also the wider world of science in general — and shows how Bem’s research, and the reactions to it, have highlighted some very serious problems (the statistical point at issue is sketched briefly after the excerpt below):
Bem’s 10-year investigation, his nine experiments, his thousand subjects—all of it would have to be taken seriously. He’d shown, with more rigor than anyone ever had before, that it might be possible to see into the future. Bem knew his research would not convince the die-hard skeptics. But he also knew it couldn’t be ignored.
When the study went public, about six months later, some of Bem’s colleagues guessed it was a hoax. Other scholars, those who believed in ESP — theirs is a small but fervent field of study — saw his paper as validation of their work and a chance for mainstream credibility.
But for most observers, at least the mainstream ones, the paper posed a very difficult dilemma. It was both methodologically sound and logically insane. Daryl Bem had seemed to prove that time can flow in two directions — that ESP is real. If you bought into those results, you’d be admitting that much of what you understood about the universe was wrong. If you rejected them, you’d be admitting something almost as momentous: that the standard methods of psychology cannot be trusted, and that much of what gets published in the field — and thus, much of what we think we understand about the mind — could be total bunk.
If one had to choose a single moment that set off the “replication crisis” in psychology — an event that nudged the discipline into its present and anarchic state, where even textbook findings have been cast in doubt — this might be it: the publication, in early 2011, of Daryl Bem’s experiments on second sight.
The replication crisis as it’s understood today may yet prove to be a passing worry or else a mild problem calling for a soft corrective. It might also grow and spread in years to come, flaring from the social sciences into other disciplines, burning trails of cinder through medicine, neuroscience, and chemistry. It’s hard to see into the future. But here’s one thing we can say about the past: The final research project of Bem’s career landed like an ember in the underbrush and set his field ablaze. . . .
When Bem started investigating ESP, he realized the details of his research methods would be scrutinized with far more care than they had been before. In the years since his work was published, those higher standards have increasingly applied to a broad range of research, not just studies of the paranormal. “I get more credit for having started the revolution in questioning mainstream psychological methods than I deserve,” Bem told me. “I was in the right place at the right time. The groundwork was already pre-prepared, and I just made it all startlingly clear.”
Looking back, however, his research offered something more than a vivid illustration of problems in the field of psychology. It opened up a platform for discussion. Bem hadn’t simply published a set of inconceivable findings; he’d done so in a way that explicitly invited introspection. In his paper proving ESP is real, Bem used the word replication 33 times. Even as he made the claim for precognition, he pleaded for its review.
“Credit to Daryl Bem himself,” [University of California-Berkeley business school professor] Leif Nelson told me. “He’s such a smart, interesting man. . . . In that paper, he actively encouraged replication in a way that no one ever does. He said, ‘This is an extraordinary claim, so we need to be open with our procedures.’ . . . It was a prompt for skepticism and action.”
Bem meant to satisfy the skeptics, but in the end he did the opposite: He energized their doubts and helped incite a dawning revolution. Yet again, one of the world’s leading social psychologists had made a lasting contribution and influenced his peers. “I’m sort of proud of that,” Bem conceded at the end of our conversation. “But I’d rather they started to believe in psi as well. I’d rather they remember my work for the ideas.”
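The statistical logic behind that worry is easy to make concrete. Here is a minimal, hypothetical Python sketch — not taken from Bem’s paper or from the article excerpted above; the sample sizes, effect sizes, the p < .05 threshold, and the assumption that only “significant” results get published are all illustrative choices of mine — showing how a literature built on significance testing plus selective publication can end up dominated by findings that will not replicate:

```python
# Toy model of the "replication crisis" logic: many small studies, most testing
# effects that do not exist, with only statistically significant results "published."
import math
import random
import statistics

random.seed(1)

def one_study(true_effect, n=20, alpha=0.05):
    """Simulate one two-group study and report whether it reaches p < alpha."""
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = math.sqrt(statistics.variance(control) / n + statistics.variance(treated) / n)
    z = diff / se
    # Two-sided p-value via the normal approximation
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p < alpha

def simulate(num_studies=10_000, share_real=0.1, real_effect=0.4):
    """Count how much of the 'published' (significant-only) record is spurious."""
    published_true = published_false = 0
    for _ in range(num_studies):
        is_real = random.random() < share_real  # only 10% of tested effects are real
        if one_study(real_effect if is_real else 0.0):
            if is_real:
                published_true += 1
            else:
                published_false += 1
    total = published_true + published_false
    print(f"'published' findings: {total} of {num_studies} studies run")
    print(f"false positives among them: {published_false / total:.0%}")

if __name__ == "__main__":
    simulate()
```

Under these assumed parameters (small samples, modest real effects, and real effects being rare among the hypotheses tested), a clear majority of the “published” findings in the toy model are false positives. That is the kind of arithmetic that makes replication, which Bem himself pleaded for, so important.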
Note that the article also contains, in its middle section, a fascinating personal profile and mini-biography of Bem himself, including a recounting of his life-long interest in mentalism, which began in his teen years and persisted into his career in academia:
As a young professor at Carnegie Mellon University, Bem liked to close out each semester by performing as a mentalist. After putting on his show, he’d tell his students that he didn’t really have ESP. In class, he also stressed how easily people can be fooled into believing they’ve witnessed paranormal phenomena.
The following paragraphs are from a talk delivered by Pinboard founder Maciej Cegłowski at the recent Emerging Technologies for the Enterprise conference in Philadelphia. Citing as Exhibit A the colossal train wreck that was the 2016 American presidential election, Cegłowski basically explains how, in the current version of the Internet that has emerged over the past decade-plus, we have collectively created a technology that is perfectly calibrated for undermining Western democratic societies and ideals.
But as incisive as his analysis is, I seriously doubt that his (equally incisive) proposed solutions, described later in the piece, will ever be implemented to any meaningful extent. I mean, if we’re going to employ the explicitly Frankensteinian metaphor of “building a monster,” then it’s important to bear in mind that Victor Frankenstein and his wretched creation did not find their way to anything resembling a happy ending. (And note that Cegłowski himself acknowledges as much at the end of his piece when he closes his discussion of proposed solutions by asserting that “even though we’re likely to fail, all we can do is try.”)
This year especially there’s an uncomfortable feeling in the tech industry that we did something wrong, that in following our credo of “move fast and break things,” some of what we knocked down were the load-bearing walls of our democracy. . . .
A question few are asking is whether the tools of mass surveillance and social control we spent the last decade building could have had anything to do with the debacle of the 2017 [sic] election, or whether destroying local journalism and making national journalism so dependent on our platforms was, in retrospect, a good idea. . . .
We built the commercial internet by mastering techniques of persuasion and surveillance that we’ve extended to billions of people, including essentially the entire population of the Western democracies. But admitting that this tool of social control might be conducive to authoritarianism is not something we’re ready to face. After all, we’re good people. We like freedom. How could we have built tools that subvert it? . . .
The economic basis of the Internet is surveillance. Every interaction with a computing device leaves a data trail, and whole industries exist to consume this data. Unlike dystopian visions from the past, this surveillance is not just being conducted by governments or faceless corporations. Instead, it’s the work of a small number of sympathetic tech companies with likable founders, whose real dream is to build robots and Mars rockets and do cool things that make the world better. Surveillance just pays the bills. . . .
Orwell imagined a world in which the state could shamelessly rewrite the past. The Internet has taught us that people are happy to do this work themselves, provided they have their peer group with them, and a common enemy to unite against. They will happily construct alternative realities for themselves, and adjust them as necessary to fit the changing facts . . . .
A lot of what we call “disruption” in the tech industry has just been killing flawed but established institutions, and mining them for parts. When we do this, we make a dangerous assumption about our ability to undo our own bad decisions, or the time span required to build institutions that match the needs of new realities.
Right now, a small caste of programmers is in charge of the surveillance economy, and has broad latitude to change it. But this situation will not last for long. The kinds of black-box machine learning that have been so successful in the age of mass surveillance are going to become commoditized and will no longer require skilled artisans to deploy. . . .
Unless something happens to mobilize the tech workforce, or unless the advertising bubble finally bursts, we can expect the weird, topsy-turvy status quo of 2017 to solidify into the new reality.