At the beginning of each semester, when I deliver a mini-sermon about my complete ban on phones — and also, for almost all purposes, laptops — in my classroom, I tell my students the very thing that journalist Zat Rana gets at in a recent article for Quartz. A smartphone or almost any cell phone in your hand, on your desk, or even in your pocket as you’re trying to concentrate on other important things is a vampire demon powered by dystopian corporate overlords whose sole purpose is to suck your soul by siphoning away your attention and immersing you in a portable customized Matrix.
Or as Rana says, in less metaphorical language:
One of the biggest problems of our generation is that while the ability to manage our attention is becoming increasingly valuable, the world around us is being designed to steal away as much of it as possible. . . . Companies like Google and Facebook aren’t just creating products anymore. They’re building ecosystems. And the most effective way to monetize an ecosystem is to begin with engagement. It’s by designing their features to ensure that we give up as much of our attention as possible.
Rana offers three pieces of sound advice for helping to reclaim your attention (the asset referred to in his article’s title): mindfulness meditation, “ruthless single-tasking,” and regular periods of deliberate detachment from the digital world.
Interestingly, it looks like there’s a mini-wave of this type of awareness building in the mediasphere. Rana’s article for Quartz was published on October 2. Four days later The Guardian published a provocative and alarming piece with this teaser: “Google, Twitter and Facebook workers who helped make technology so addictive are disconnecting themselves from the internet. Paul Lewis reports on the Silicon Valley refuseniks alarmed by a race for human attention.” It’s a very long and in-depth article. Here’s a taste:
There is growing concern that as well as addicting users, technology is contributing toward so-called “continuous partial attention”, severely limiting people’s ability to focus, and possibly lowering IQ. One recent study showed that the mere presence of smartphones damages cognitive capacity — even when the device is turned off. “Everyone is distracted,” Rosenstein says. “All of the time.”
But those concerns are trivial compared with the devastating impact upon the political system that some of Rosenstein’s peers believe can be attributed to the rise of social media and the attention-based market that drives it. . . .
Tech companies can exploit such vulnerabilities to keep people hooked; manipulating, for example, when people receive “likes” for their posts, ensuring they arrive when an individual is likely to feel vulnerable, or in need of approval, or maybe just bored. And the very same techniques can be sold to the highest bidder. . . .
“The dynamics of the attention economy are structurally set up to undermine the human will,” [ex-Google strategist James Williams] says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.” If Apple, Facebook, Google, Twitter, Instagram and Snapchat are gradually chipping away at our ability to control our own minds, could there come a point, I ask, at which democracy no longer functions?
“Will we be able to recognise it, if and when it happens?” Williams replies. “And if we can’t, then how do we know it hasn’t happened already?”
In the same vein, Nicholas Carr (no stranger to The Teeming Brain’s pages) published a similarly aimed — and even a similarly titled — essay in the Weekend Review section of The Wall Street Journal on the very day the Guardian article appeared (October 6). “Research suggests that as the brain grows dependent on phone technology, the intellect weakens,” says the teaser. Here’s a representative passage from the essay itself:
Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object in the environment that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.” Media and communication devices, from telephones to TV sets, have always tapped into this instinct. Whether turned on or switched off, they promise an unending supply of information and experiences. By design, they grab and hold our attention in ways natural objects never could.
But even in the history of captivating media, the smartphone stands out. It’s an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what [Adrian] Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it’s part of the surroundings — which it always is. Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library, and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That’s what a smartphone represents to us. No wonder we can’t take our minds off it.
Full Text: “How Smartphones Hijack Our Minds”
At his blog, Carr noted that his essay and the Guardian article appeared on the same day. He also noted the striking similarity between the two titles, calling it a “telling coincidence” and commenting:
It’s been clear for some time that smartphones and social-media apps are powerful distraction machines. They routinely divide our attention. But the “hijack” metaphor — I took it from Adrian Ward’s article “Supernormal” — implies a phenomenon greater and more insidious than distraction. To hijack something is to seize control of it from its rightful owner. What’s up for grabs is your mind.
Perhaps the most astonishing thing about all of this is that John Carpenter warned us about it three decades ago, and not vaguely, but quite specifically and pointedly. The only difference is that the technology in his (quasi-)fictional presentation was television. Well, that, plus the fact that his evil overlords really were ill-intentioned, whereas ours may be in some cases as much victims of their own devices as we are. In any event:
There is a signal broadcast every second of every day through our television sets. Even when the set is turned off, the brain receives the input. . . . Our impulses are being redirected. We are living in an artificially induced state of consciousness that resembles sleep. . . . We have been lulled into a trance.
From an essay by Ed Finn, founding director of the Center for Science and the Imagination at Arizona State University:
We are all centaurs now, our aesthetics continuously enhanced by computation. Every photograph I take on my smartphone is silently improved by algorithms the second after I take it. Every document autocorrected, every digital file optimised. Musicians complain about the death of competence in the wake of Auto-Tune, just as they did in the wake of the synthesiser in the 1970s. It is difficult to think of a medium where creative practice has not been thoroughly transformed by computation and an attendant series of optimisations. . . .
Today, we experience art in collaboration with these algorithms. How can we disentangle the book critic, say, from the highly personalised algorithms managing her notes, communications, browsing history and filtered feeds on Facebook and Instagram? . . . .
The immediate creative consequence of this sea change is that we are building more technical competence into our tools. It is getting harder to take a really terrible digital photograph, and in correlation the average quality of photographs is rising. From automated essay critiques to algorithms that advise people on fashion errors and coordinating outfits, computation is changing aesthetics. When every art has its Auto-Tune, how will we distinguish great beauty from an increasingly perfect average? . . .
We are starting to perceive the world through computational filters, perhaps even organising our lives around the perfect selfie or defining our aesthetic worth around the endorsements of computationally mediated ‘friends’. . . .
Human creativity has always been a response to the immense strangeness of reality, and now its subject has evolved, as reality becomes increasingly codeterminate, and intermingled, with computation. If that statement seems extreme, consider the extent to which our fundamental perceptions of reality — from research in the physical sciences to finance to the little screens we constantly interject between ourselves and the world — have changed what it means to live, to feel, to know. As creators and appreciators of the arts, we would do well to remember all the things that Google does not know.
FULL TEXT: “Art by Algorithm”
A recently published essay by University of Virginia professor Chad Wellmon in The Hedgehog Review stands as one of the most elegant, incisive, and persuasive entries I’ve yet read in the great debate over the effects of the Internet/digital media revolution on human consciousness and culture. And I’ve read a fair number of them.
On the one hand, there are those who claim that the digitization efforts of Google, the social-networking power of Facebook, and the era of big data in general are finally realizing that ancient dream of unifying all knowledge. . . . [They say] Our information age is unique not only in its scale, but in its inherently open and democratic arrangement of information. Information has finally been set free. Digital technologies, claim the most optimistic among us, will deliver a universal knowledge that will make us smarter and ultimately liberate us. These utopic claims are related to similar visions about a trans-humanist future in which technology will overcome what were once the historical limits of humanity: physical, intellectual, and psychological.
You would have had to be hiding under the proverbial rock in order to avoid hearing about the concept of the “filter bubble” in the past year. It comes from peace activist and MoveOn.org cofounder Eli Pariser’s 2011 book The Filter Bubble: What the Internet Is Hiding from You. The basic idea is that the rise of “personalization” in Internet searches — the tendency of Google and Facebook and Netflix and other prominent online services to use complex algorithms to gradually tailor search results to the perceived preferences of each user — ends up blocking out the true fullness and richness of the world of information and ideas. “Search engines weight our search results to our own preferences. (My search results won’t look like yours.) Sites will filter our news (without asking us) to bring us what they think we want,” a reviewer for The Christian Science Monitor summarizes. “The consequences of this social engineering, Pariser argues, is that we interact more with people who think like we do. Rather than fulfilling the early Internet dreams of diversity and freedom of choice, we are living in an echo chamber.”
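To make Pariser’s mechanism concrete, here is a minimal, purely hypothetical sketch in Python of a preference-weighted ranker with an engagement feedback loop. Every item, topic, and number in it is invented for illustration, and it claims nothing about how Google, Facebook, or Netflix actually implement personalization; it simply shows how tailoring results to perceived preferences can narrow what a user ever sees.

```python
# Toy illustration only: all items, topics, and weights below are invented.
from collections import Counter

def personalized_ranking(items, click_history, rounds=3, top_k=3):
    """Rank items by topic affinity learned from past clicks, then simulate the
    feedback loop in which whatever gets shown gets engaged with, which in turn
    boosts similar items even further."""
    affinity = Counter(click_history)  # topic -> how much the user has engaged with it
    shown = []
    for _ in range(rounds):
        # Score each item by how much the user already likes its topic (unseen topics score 0).
        ranked = sorted(items, key=lambda item: affinity[item["topic"]], reverse=True)
        shown = ranked[:top_k]            # the filtered slice the user actually sees
        for item in shown:
            affinity[item["topic"]] += 1  # engagement feeds straight back into the model
    return shown, affinity

items = [
    {"title": "Local politics update", "topic": "politics"},
    {"title": "Climate research digest", "topic": "science"},
    {"title": "Celebrity gossip roundup", "topic": "gossip"},
    {"title": "Opposing-viewpoint op-ed", "topic": "other-side-politics"},
    {"title": "New phone review", "topic": "tech"},
]

shown, affinity = personalized_ranking(items, click_history=["gossip", "tech", "tech"])
print([item["title"] for item in shown])  # the same few familiar topics keep surfacing
print(affinity)                           # topics that are never shown never gain ground
```

Run it and the already-clicked topics crowd the visible slice within a round or two, while the unclicked ones never accumulate enough affinity to surface. That, in miniature, is the echo-chamber dynamic Pariser describes.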
Here at The Teeming Brain I’ve gone on at some length about the disastrous/dystopian trends in contemporary American education, including, especially, the rise of the techno-corporate consumer model that assigns a purely economic raison d’être to higher education. (See, for example, my “America’s Colleges at a Crossroads” series and additional articles.) Today I’m fascinated, and rather psyched, to discover a smart and forceful statement in favor of pursuing a humanities-oriented education, written by somebody who earned a grad degree at MIT and then launched into a lucrative career in computer programming, only to abandon it a few years later to earn a Ph.D. in philosophy because his technological interests organically led him to a passionate personal focus on philosophical matters.
Damon Horowitz’s bio says he “is currently in-house philosopher at Google” — an intriguing job title if ever I heard one — and his essay published yesterday (July 17) at The Chronicle of Higher Education is described as “an excerpt of a keynote address he gave in the spring at the BiblioTech conference at Stanford University.” A quick Google search reveals that the address itself was titled “Why You Should Quit Your Technology Job and Get a Humanities Ph.D.”
Here are some choice highlights from a highlight-filled essay that’s quotable almost in toto:
I wanted to better understand what it was about how we were defining intelligence that was leading us astray: What were we failing to understand about the nature of thought in our attempts to build thinking machines? And, slowly, I realized that the questions I was asking were philosophical questions — about the nature of thought, the structure of language, the grounds of meaning. So if I really hoped to make major progress in AI, the best place to do this wouldn’t be another AI lab. If I really wanted to build a better thinker, I should go study philosophy.
In learning the limits of my technologist worldview, I didn’t just get a few handy ideas about how to build better AI systems. My studies opened up a new outlook on the world. I would unapologetically characterize it as a personal intellectual transformation: a renewed appreciation for the elements of life that are not scientifically understood or technologically engineered.
In other words: I became a humanist.
Maybe you, too, are disposed toward critical thinking. Maybe, despite the comfort and security that your job offers, you, too, have noticed cracks in the technotopian bubble.
Maybe you are worn out by endless marketing platitudes about the endless benefits of your products; and you’re not entirely at ease with your contribution to the broader culture industry. Maybe you are unsatisfied by oversimplifications in the product itself. What exactly is the relationship created by “friending” someone online? How can your online profile capture the full glory of your performance of self? Maybe you are cautious about the impact of technology. You are startled that our social-entertainment Web sites are playing crucial roles in global revolutions. You wonder whether those new tools, like any weapons, can be used for evil as well as good, and you are reluctant to engage in the cultural imperialism that distribution of a technology arguably entails.
[D]o you really value your mortgage more than the life of the mind? What is the point of a comfortable living if you don’t know what the humanities have taught us about living well? If you already have a job in the technology industry, you are already significantly more wealthy than the vast majority of our planet’s population. You already have enough.
If you are worried about your career, I must tell you that getting a humanities Ph.D. is not only not a danger to your employability, it is quite the opposite. I believe there is no surer path to leaping dramatically forward in your career than to earn a Ph.D. in the humanities. Because the thought leaders in our industry are not the ones who plodded dully, step by step, up the career ladder. The leaders are the ones who took chances and developed unique perspectives.
Complete text at The Chronicle‘s site: “From Technologist to Philosopher”
What’s more, Horowitz’s speech is available on YouTube.
The Google Effect: New evidence of the Internet’s impact on brain and memory recalls Plato’s ancient warning
It’s not every day you get to note/observe/say something like this: A 2400-year-old warning from Plato has just been confirmed, or at least inadvertently recalled, by newly published research about the cognitive and neurological effects of our now-ubiquitous culture of Internet searching.
Here’s the lowdown:
Researchers at Columbia University . . . say Google and its search-engine brethren have started to reshape your brain, making you more likely to forget information that is only a quick Internet search away. The new research, published in Science magazine, suggests that people are adapting to the very existence of search engines. For most of us, the thinking goes, the “what” isn’t what matters now; it’s the “where,” as in where can you find the information.
In a series of experiments, researchers found that subjects “were significantly more likely to remember information if they thought they would not be able to find it later,” as the New York Times puts it. When subjects were given information and folder names in which the info was stored in one test, they were more likely to recall the folder names than the information itself. The researchers, naturally, have coined a term for this development: the “Google effect.”
The Internet has become “an external memory source that we can access at any time,” Betsy Sparrow, the study’s principal researcher, explains on Columbia’s website.
— “Study Shows Internet Alters Memory,” Christina Gossmann, Slate, July 15, 2011
And here’s the Plato connection: In the Phaedrus, a dialogue written circa 370 B.C.E., Plato depicted his teacher Socrates telling the story of Thamus, a great Egyptian king who once entertained the god Theuth, inventor of mathematics, astronomy, and many other such things, including, most famously, writing. (Obviously, Theuth is probably Plato/Socrates’ variation on Thoth or Hermes.) Theuth showed Thamus many of his inventions, and Thamus praised them all. But Theuth was especially proud of his invention of writing, and he introduced it to Thamus by saying, “Here is an accomplishment, my lord the King, which will improve both the wisdom and the memory of the Egyptians. I have discovered a sure receipt for memory and wisdom.”
In a needle-scratching response that clashes jaggedly with our modern-day cultural assumptions about the supreme intellectual value of writing and literacy, Thamus vigorously disagreed that writing was a good thing. The wording of his reply makes it sound like he was peering through a wormhole into the 21st century and reading the new Columbia University report:
You, who are the father of writing, have out of fondness for your offspring attributed to it quite the opposite of its real function. Those who acquire it will cease to exercise their memory and become forgetful; they will rely on writing to bring things to their remembrance by external signs instead of by their own internal resources. What you have discovered is a receipt for recollection, not for memory.
— Phaedrus, trans. Walter Hamilton
So what are we to make of this? For one thing, it’s instructive to consider the possible negative effects of the phenomenon in question. Nicole Ferraro, writing for Internet Evolution, offers these thoughts on the Columbia study’s findings:
To tell the truth, we’ve done this to ourselves: Why know directions when we can get turn-by-turn directions on our iPhones? Why remember someone’s email address when Gmail is going to produce it automatically when we begin typing letters? I mean, why remember any fact that can easily be pulled up on the search engines we carry in our pockets? And how are we expected to remember information when we’re consuming so much at once and jumping from task to task?
So, yes, this was bound to happen. But what are the implications of this? Would you agree with the headline on this Register article about the same study?: “Google turning us into forgetful morons.”
— “Redefining ‘Knowledge’ in the Age of Google,” Nicole Ferraro, Internet Evolution, July 15, 2011
But Ferraro’s concerns about forgotten directions and phone numbers and email addresses pale in comparison to the dire moral/intellectual/social prognosis that Thamus drew from his insight:
And as for wisdom, your pupils will have the reputation for it without the reality: they will receive a quantity of information without proper instruction, and in consequence be thought very knowledgeable when they are for the most part quite ignorant. And because they are filled with the conceit of wisdom instead of real wisdom they will be a burden to society.
The aforementioned Betsy Sparrow, head of the Columbia University research team, told The New York Times “her experiments had led her to conclude that the Internet has become our primary external storage system. ‘Human memory,’ she said, ‘is adapting to new communications technology.'” Inspired by Thamus, Plato, and all of the above, we might pause to notice the troubling conundrum built into the very idea of an “external storage medium” for the human mind. Don’t get me wrong: I’m not knocking literacy. There’s obviously a very strong case to be made that it’s precisely our development of external storage media — books and so on, and now the Internet — that has allowed us to accomplish so many of the great things we have accomplished as a species. There’s an equally strong case that the very definition of wisdom must now involve literacy: not just the bare-bones reading and writing skills promoted by godawful social engineering programs like No Child Left Behind, but a foundational knowledge of the great things that have been thought, said, and written in the past, along with, to ping another aspect of the Columbia report, the increasingly important ability to access information accurately. Profound forgetfulness of the past, now preserved in writing and increasingly in digital form, is the very definition of a dystopian dark age.
But that said, we’d all be well advised to keep an eye and ear on the judgment of Thamus as we live our way inevitably into the Brave New World of our collective cyberfuture. A crucial aspect of authentic wisdom is the ability, and more, the drive, to become aware of our guiding axioms, so that we can really see, know, and understand — and question and, when necessary, revise or reject — the assumptions by which we conduct our lives. “The unexamined life is not worth living,” Socrates famously said. Today we’re living in a period where technology’s power within and over culture and human life is reaching a kind of critical mass, just as Neil Postman observed and prophesied in Technopoly: The Surrender of Culture to Technology — a book that begins with Postman’s recounting of the story of Thamus and his judgment on writing. We long ago passed the threshold where writing and literacy became an ineradicable and inescapable part of who we are, both societally and individually. As we hurtle toward a future along the lines of, perhaps, what the Singularitarians are slavering to see, one of the simplest yet trickiest things we can do to keep our bearings and preserve our humanity — even as the very meaning of that word may begin to shift — is to remain awake and reflective about the changes it’s all working on our very souls.
Image credit: “Computer Business,” from Truthout.org under Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic (CC BY-NC-SA 2.0)
Okay, so the headline I gave to this post is a bit slanted for rhetorical effect. When Eric Schmidt, Google’s 54-year-old chief executive and chairman, spoke last Friday, January 29, at the World Economic Forum in Davos, Switzerland, he didn’t actually repeat and respond to the question contained in the sensationalistic headline of Nicholas Carr’s 2008 Atlantic article, “Is Google Making Us Stupid? What the Internet Is Doing to Our Brains.”
But, as reported by news outlets everywhere, he did voice some of the same concerns that Carr highlighted, and in very direct and forceful terms, too. Specifically, he expressed concern that today’s young people are experiencing serious impairment in their ability to perform “deep reading” as they grow up in our frenetic, buzzing, always-online environment of mobile electronic devices.
Even more specifically, he told his audience of world economic movers and shakers,
The one [thing] that I do worry about is the question of “deep reading.” . . . As the world looks to these instantaneous devices . . . you spend less time reading all forms of literature, books, magazines and so forth. . . . That probably has an effect on cognition, probably has an effect on reading.
Now that’s interesting! And it’s also — to give credit where scads are due — not all that surprising. Schmidt isn’t just mouthing these concerns, and his focus on them isn’t new. For instance, when he spoke in 2009 at the University of Pennsylvania’s commencement ceremony, he urged the new graduates to turn off their computers and discover the non-computerized world of human relationships and the natural environment.
And, interestingly, he may have been led at least in part to concentrate on such things by Carr’s very article, as recounted by Carr himself in a response to Schmidt’s Davos speech (“Eric Schmidt’s Second Thoughts,” Jan. 30). Carr traces how, first, Schmidt responded publicly to the “Stupid” article in 2008 by pointing out that people made the same ominous claims about plummeting intelligence levels and attention spans when color television was introduced, when MTV was launched, etc., and yet today “we’re smarter than ever.” But then a few months later, in early 2009, Schmidt told Charlie Rose,
I worry that the level of interrupt, the sort of overwhelming rapidity of information — and especially of stressful information — is in fact affecting cognition. It is in fact affecting deeper thinking. I still believe that sitting down and reading a book is the best way to really learn something. And I worry that we’re losing that.
And then came the Davos speech in early 2010.
Carr comments, “I’m glad Schmidt has continued to ponder this issue, and I salute him for having the courage to air his concerns publicly.”
To which I respond with a hearty “Ditto,” a thousand times over. For what it’s worth, I shamelessly love Google’s search engine, and I’ve derived lots of benefit from some of its associated tools over the past year or two (Calendar, Reader, et al.). So I certainly can’t bash the company with a straight face. But its Leviathan-like rise to dominance is certainly valid cause for concern. A host of troubling moral questions arise from Google’s pervasive influence, as seen in its de facto ability to make or break businesses with a mere tweak of its search algorithm, its virtually single-handed role in giving rise to SEO as a dominant marketing concern, its inherent growth into a threat to every business sector it enters, and so on. And thus it’s deeply encouraging to see that Schmidt, who wields so much power over today’s social and business environment, possesses not just a high IQ and business smarts (he obviously didn’t attain the position of Google CEO by being a dummy) but a reflective sensibility for deeply human concerns to boot.