Blog Archives

Your smartphone is built to hijack and harvest your mind

At the beginning of each semester, when I deliver a mini-sermon about my complete ban on phones — and also, for almost all purposes, laptops — in my classroom, I tell my students the very thing that journalist Zat Rana gets at in a recent article for Quartz. A smartphone, or almost any cell phone, in your hand, on your desk, or even in your pocket as you’re trying to concentrate on other important things is a vampire demon powered by dystopian corporate overlords whose sole purpose is to suck your soul by siphoning away your attention and immersing you in a portable, customized Matrix.

Or as Rana says, in less metaphorical language:

One of the biggest problems of our generation is that while the ability to manage our attention is becoming increasingly valuable, the world around us is being designed to steal away as much of it as possible….Companies like Google and Facebook aren’t just creating products anymore. They’re building ecosystems. And the most effective way to monetize an ecosystem is to begin with engagement. It’s by designing their features to ensure that we give up as much of our attention as possible.

Full Text: “Technology is destroying the most important asset in your life”

Rana offers three pieces of sound advice for helping to reclaim your attention (which is the asset referred to in the title): mindfulness meditation, “ruthless single-tasking,” and regular periods of deliberate detachment from the digital world.

Interestingly, it looks like there’s a mini-wave of this type of awareness building in the mediasphere. Rana’s article for Quartz was published on October 2. Four days later The Guardian published a provocative and alarming piece with this teaser: “Google, Twitter and Facebook workers who helped make technology so addictive are disconnecting themselves from the internet. Paul Lewis reports on the Silicon Valley refuseniks alarmed by a race for human attention.” It’s a very long and in-depth article. Here’s a taste:

There is growing concern that as well as addicting users, technology is contributing toward so-called “continuous partial attention”, severely limiting people’s ability to focus, and possibly lowering IQ. One recent study showed that the mere presence of smartphones damages cognitive capacity — even when the device is turned off. “Everyone is distracted,” Rosenstein says. “All of the time.”

But those concerns are trivial compared with the devastating impact upon the political system that some of Rosenstein’s peers believe can be attributed to the rise of social media and the attention-based market that drives it. . . .

Tech companies can exploit such vulnerabilities to keep people hooked; manipulating, for example, when people receive “likes” for their posts, ensuring they arrive when an individual is likely to feel vulnerable, or in need of approval, or maybe just bored. And the very same techniques can be sold to the highest bidder. . . .

“The dynamics of the attention economy are structurally set up to undermine the human will,” [ex-Google strategist James Williams] says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.” If Apple, Facebook, Google, Twitter, Instagram and Snapchat are gradually chipping away at our ability to control our own minds, could there come a point, I ask, at which democracy no longer functions?

“Will we be able to recognise it, if and when it happens?” Williams replies. “And if we can’t, then how do we know it hasn’t happened already?”

Full Text: “‘Our minds can be hijacked’: The tech insiders who fear a smartphone dystopia”

In the same vein, Nicholas Carr (no stranger to The Teeming Brain’s pages) published a similarly aimed — and even a similarly titled — essay in the Weekend Review section of The Wall Street Journal on the very day the Guardian article appeared (October 6). “Research suggests that as the brain grows dependent on phone technology, the intellect weakens,” says the teaser. Here’s a representative passage from the essay itself:

Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object in the environment that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.” Media and communication devices, from telephones to TV sets, have always tapped into this instinct. Whether turned on or switched off, they promise an unending supply of information and experiences. By design, they grab and hold our attention in ways natural objects never could.

But even in the history of captivating media, the smartphone stands out. It’s an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what [Adrian] Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it’s part of the surroundings — which it always is. Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library, and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That’s what a smartphone represents to us. No wonder we can’t take our minds off it.

Full Text: “How Smartphones Hijack Our Minds”

At his blog, Carr noted that his essay and the Guardian article had appeared on the same day, and that their titles were strikingly similar. Calling this a “telling coincidence,” he commented:

It’s been clear for some time that smartphones and social-media apps are powerful distraction machines. They routinely divide our attention. But the “hijack” metaphor — I took it from Adrian Ward’s article “Supernormal” — implies a phenomenon greater and more insidious than distraction. To hijack something is to seize control of it from its rightful owner. What’s up for grabs is your mind.

Perhaps the most astonishing thing about all of this is that John Carpenter warned us about it three decades ago, and not vaguely, but quite specifically and pointedly. The only difference is that the technology in his (quasi-)fictional presentation was television. Well, that, plus the fact that his evil overlords really were ill-intentioned, whereas ours may be in some cases as much victims of their own devices as we are. In any event:

There is a signal broadcast every second of every day through our television sets. Even when the set is turned off, the brain receives the input. . . . Our impulses are being redirected. We are living in an artificially induced state of consciousness that resembles sleep. . . . We have been lulled into a trance.

Art, creativity, and what Google doesn’t know

From an essay by Ed Finn, founding director of the Center for Science and the Imagination at Arizona State University:

We are all centaurs now, our aesthetics continuously enhanced by computation. Every photograph I take on my smartphone is silently improved by algorithms the second after I take it. Every document autocorrected, every digital file optimised. Musicians complain about the death of competence in the wake of Auto-Tune, just as they did in the wake of the synthesiser in the 1970s. It is difficult to think of a medium where creative practice has not been thoroughly transformed by computation and an attendant series of optimisations. . . .

Today, we experience art in collaboration with these algorithms. How can we disentangle the book critic, say, from the highly personalised algorithms managing her notes, communications, browsing history and filtered feeds on Facebook and Instagram? . . . .

The immediate creative consequence of this sea change is that we are building more technical competence into our tools. It is getting harder to take a really terrible digital photograph, and in correlation the average quality of photographs is rising. From automated essay critiques to algorithms that advise people on fashion errors and coordinating outfits, computation is changing aesthetics. When every art has its Auto-Tune, how will we distinguish great beauty from an increasingly perfect average? . . .

We are starting to perceive the world through computational filters, perhaps even organising our lives around the perfect selfie or defining our aesthetic worth around the endorsements of computationally mediated ‘friends’. . . .

Human creativity has always been a response to the immense strangeness of reality, and now its subject has evolved, as reality becomes increasingly codeterminate, and intermingled, with computation. If that statement seems extreme, consider the extent to which our fundamental perceptions of reality — from research in the physical sciences to finance to the little screens we constantly interject between ourselves in the world — have changed what it means to live, to feel, to know. As creators and appreciators of the arts, we would do well to remember all the things that Google does not know.

FULL TEXT: “Art by Algorithm”

 

Recommended Reading 38

Mexican Cartels Dispatch Trusted Agents to Live Deep Inside United States
The Washington Post (Associated Press), April 1, 2013

Mexican drug cartels whose operatives once rarely ventured beyond the U.S. border are dispatching some of their most trusted agents to live and work deep inside the United States — an emboldened presence that experts believe is meant to tighten their grip on the world’s most lucrative narcotics market and maximize profits. . . . [A] wide-ranging Associated Press review of federal court cases and government drug-enforcement data, plus interviews with many top law enforcement officials, indicate the groups have begun deploying agents from their inner circles to the U.S. Cartel operatives are suspected of running drug-distribution networks in at least nine non-border states, often in middle-class suburbs in the Midwest, South and Northeast. “It’s probably the most serious threat the United States has faced from organized crime,” said Jack Riley, head of the Drug Enforcement Administration’s Chicago office.

. . . . Years ago, Mexico faced the same problem — of then-nascent cartels expanding their power — “and didn’t nip the problem in the bud,” said Jack Killorin, head of an anti-trafficking program in Atlanta for the Office of National Drug Control Policy. “And see where they are now.” Riley sounds a similar alarm: “People think, ‘The border’s 1,700 miles away. This isn’t our problem.’ Well, it is. These days, we operate as if Chicago is on the border.”

. . . . “This is the first time we’ve been seeing it — cartels who have their operatives actually sent here,” said Richard Pearson, a lieutenant with the Louisville Metropolitan Police Department, which arrested four alleged operatives of the Zetas cartel in November in the suburb of Okolona. People who live on the tree-lined street where authorities seized more than 2,400 pounds of marijuana and more than $1 million in cash were shocked to learn their low-key neighbors were accused of working for one of Mexico’s most violent drug syndicates, Pearson said.

. . . . In Chicago, the police commander who oversees narcotics investigations, James O’Grady, said street-gang disputes over turf account for most of the city’s uptick in murders last year, when slayings topped 500 for the first time since 2008. Although the cartels aren’t dictating the territorial wars, they are the source of drugs. Riley’s assessment is stark: He argues that the cartels should be seen as an underlying cause of Chicago’s disturbingly high murder rate. “They are the puppeteers,” he said. “Maybe the shooter didn’t know and maybe the victim didn’t know that. But if you follow it down the line, the cartels are ultimately responsible.”

* * *

Google Revolution Isn’t Worth Our Privacy
Evgeny Morozov, Notes EM (reprinted from Financial Times), April 5, 2013

[EDITOR’S NOTE: Last year I abandoned Google’s search engine (for DuckDuckGo), Google Mail (for Zoho), Google Docs (for various substitutes), and Google Reader (for Netvibes) because of the company’s decision, mentioned by Morozov in this op-ed, to sew together privacy data from more than 60 of its products/services into a single, mega-master One Profile to Rule Them All for each of its users. Here, Morozov lays out some of the more far-reaching intentions behind, and meanings and implications of, Google’s move. Be sure to read his words in the mutually illuminating light of the article directly below about the new Facebook phone.]

Let’s give credit where it is due: Google is not hiding its revolutionary ambitions. As its co-founder Larry Page put it in 2004, eventually its search function “will be included in people’s brains” so that “when you think about something and don’t really know much about it, you will automatically get information”.

Science fiction? The implant is a rhetorical flourish but Mr Page’s utopian project is not a distant dream. In reality, the implant does not have to be connected to our brains. We carry it in our pockets — it’s called a smartphone.

So long as Google can interpret — and predict — our intentions, Mr Page’s vision of a continuous and frictionless information supply could be fulfilled. However, to realise this vision, Google needs a wealth of data about us. Knowing what we search for helps — but so does knowing about our movements, our surroundings, our daily routines and our favourite cat videos.

. . . . [W]hen last year Google announced its privacy policy, which would bring the data collected through its more than 60 online services under one roof, that move made sense. The obvious reason for doing so is to make individual user profiles even more appealing to advertisers: when Google tracks you it can predict what ads to serve you much better than when it tracks you only across one such service.

But there is another reason, of course — and it has to do with the Grand Implant Agenda: the more Google knows about us, the easier it can make predictions about what we want – or will want in the near future. Google Now, the company’s latest offering, is meant to do just that: by tracking our every email, appointment and social networking activity, it can predict where we need to be, when, and with whom. Perhaps, it might even order a car to drive us there — the whole point is to relieve us of active decision-making. The implant future is already here — it’s just not evenly resisted.

* * *

The Soul of a New (Facebook) Machine
Alexis C. Madrigal, The Atlantic, April 4, 2013

Teaser:  Facebook finally brings a phone to market, sort of.

[T]he biggest play here is not technical or strategic, but rhetorical. Facebook wants to change the way people think about technologies. . . . Throughout Zuckerberg’s talk, people and Facebook friends were used interchangeably. And for Zuckerberg and his employees, I think this is technically true. For them, all the people they care about are not only on Facebook, but active users who devote time and resources to building digital streams that are legible to other people as their lives. So, while you can read the Facebook phone announcement as the story of the company’s deeper integration with Google’s Android operating system, I also read Facebook Home as a story of the integration that Facebook’s employees have with their own product. And they’d like for the rest of the world to experience what they do.

. . . . Why do I think it is so important not to allow Zuckerberg to redefine “people” as “Facebook friends”? Because we need to be able to evaluate this technology’s impact very specifically within Facebook’s culture and aims. Facebook Home is not a story about “making the world more open and connected,” in general. This is a story about Facebook “making the world more open and connected,” with all the specific definitions the company brings to those ideas.

. . . . It’s not that I think Facebook communications are inferior to other ones, whether that’s face-to-face, Twitter, talking on the phone, or standard text messaging. That’s not the point. The point is that they are *not the same* as these other things.

. . . . Will it be worth opening up every part of your phone interaction to Facebook in order to access that experience? Do you want your definition of a computer to center on Facebook Friends and the limited et [sic] of actions you can take with them? I can’t answer that for you, but I can say that it is a tradeoff, and the more you think about it, the better.

* * *

The Meme Hustler
Evgeny Morozov, The Baffler No. 22 (April 8, 2013)

[EDITOR’S NOTE: Yes, it’s Morozov again. The man is all but ubiquitous today, and that’s a good thing, because he’s pointedly worth listening to. In the case of this particular piece, he’s pointedly worth listening to very slowly and deeply, because this is some seriously insightful — and darkly, counterculturally revolutionary — stuff that he’s laying out about the hijacking of our collective cultural discourse by a kind of linguistic-conceptual virus that disguises the ideological core assumptions of digital techno-utopianism under a cloak of inevitability, so that any serious critical examination of them becomes literally unthinkable.]

While the brightest minds of Silicon Valley are “disrupting” whatever industry is too crippled to fend off their advances, something odd is happening to our language. Old, trusted words no longer mean what they used to mean; often, they don’t mean anything at all. Our language, much like everything these days, has been hacked. Fuzzy, contentious, and complex ideas have been stripped of their subversive connotations and replaced by cleaner, shinier, and emptier alternatives; long-running debates about politics, rights, and freedoms have been recast in the seemingly natural language of economics, innovation, and efficiency. Complexity, as it turns out, is not particularly viral.

. . . [A] clique of techno-entrepreneurs has hijacked our language and, with it, our reason. In the last decade or so, Silicon Valley has triggered its own wave of linguistic innovation, a wave so massive that a completely new way to analyze and describe the world — a silicon mentality of sorts — has emerged in its wake. The old language has been rendered useless; our pre-Internet vocabulary, we are told, needs an upgrade.

. . . That we would eventually be robbed of a meaningful language to discuss technology was entirely predictable. That the conceptual imperialism of Silicon Valley would also pollute the rest of our vocabulary wasn’t.

The enduring emptiness of our technology debates has one main cause, and his name is Tim O’Reilly. The founder and CEO of O’Reilly Media, a seemingly omnipotent publisher of technology books and a tireless organizer of trendy conferences, O’Reilly is one of the most influential thinkers in Silicon Valley. Entire fields of thought — from computing to management theory to public administration — have already surrendered to his buzzwordophilia, but O’Reilly keeps pressing on. Over the past fifteen years, he has given us such gems of analytical precision as “open source,” “Web 2.0,” “government as a platform,” and “architecture of participation.” O’Reilly doesn’t coin all of his favorite expressions, but he promotes them with religious zeal and enviable perseverance. While Washington prides itself on Frank Luntz, the Republican strategist who rebranded “global warming” as “climate change” and turned “estate tax” into “death tax,” Silicon Valley has found its own Frank Luntz in Tim O’Reilly.

* * *

Grof on Giger: The Transpersonal Nature of Art, Inspiration, and Creativity
Karey Pohn, Association for Holotropic Breathwork International, February 28, 2013 (reprinted from The Inner Door, May 2010)

[EDITOR’S NOTE: Stanislav Grof is an icon and a legend in the field of transpersonal psychology, and is one of the field’s founders. H. R. Giger is an icon and a legend in the world of art, having made his mark as a painter, sculptor, and set designer with a genius for the dark and surreal, with his most famous work probably being his Academy Award-winning design of the aliens and their environment in the Alien film franchise, followed closely by his breathtaking semi-Lovecraft-inspired paintings in the 1977 book Necronomicon. In this interview, Grof muses — pun definitely intended — on the transpersonal/transcendent sources of Giger’s inspiration.]

I first encountered his work in Necronomicon, which was a large format, high-quality paperback. I couldn’t believe what I saw. It was absolutely amazing. Now, I have a good understanding of him, not only because we have spent a lot of personal time together, but I had the chance to interview him for many, many hours for the book; and during that time, I was able to find out not only about his life but also about how he works.

It’s extraordinary. Some of his large paintings cover one wall in his house, and these amazing compositions are frequently arranged symmetrically. I found out that particularly when he is working with an airbrush, he has absolutely no idea what he is painting. He just begins in the left upper corner and aims the airbrush at the canvas. Then, as he told me, something just comes through, and he is himself surprised by what emerges.

In discussing Giger’s genius, I quote what Friedrich Nietzsche wrote in Thus Spoke Zarathustra (1885) about his own state of consciousness while creating:

If one had the smallest vestige of superstition left in one, it would hardly be possible to set aside the idea that one is mere incarnation, mouthpiece, or medium of an almighty power. The idea of revelation, in the sense that something, which profoundly convulses and shatters one, become suddenly visible and audible with indescribable certainty and accuracy, describes the simple fact. One hears—one does not seek; one takes—one does not ask who gives; a thought suddenly flashes up like lightning, it comes with necessity, without faltering—I never had any choice in the matter.

In essence, something grabs you and comes through, and you basically become a channel for it. You’re not really the creator of it. You’re a mediator. Hans Rudi certainly falls into that category.

* * *

The Visionary World of H. R. Giger (pdf), a.k.a. H. R. Giger and the Zeitgeist of the Twentieth Century
Stanislav Grof, October 2005

Several years ago, I had the privilege and pleasure to spend some time with Oliver Stone, visionary genius who has portrayed in his films with extraordinary artistic power the shadow side of modern humanity. At one point, we talked about Ridley Scott’s movie Alien and the discussion focused on H. R. Giger, whose creature and set designs were the key element in the film’s success. In the 1979 Academy Awards ceremony held at the Dorothy Chandler Pavilion in Los Angeles in April 1980, Giger received for his work on the Alien an Oscar for best achievement in visual effects.

I have known Giger’s work since the publication of his Necronomicon and have always felt a deep admiration for him, not only as an artistic genius, but also a visionary with an uncanny ability to depict the deep dark recesses of the human psyche revealed by modern consciousness research. In our discussion, I shared my feelings with Oliver Stone, who turned out to be himself a great admirer of Giger. His opinion about Giger and his place in the world of art and in human culture was very original and interesting. “I do not know anybody else,” he said, “who has so accurately portrayed the soul of modern humanity. A few decades from now when they will talk about the twentieth century, they will think of Giger.”

Although Oliver Stone’s statement momentarily surprised me by its extreme nature, I immediately realized that it reflected a profound truth. Since then, I often recalled this conversation when I was confronted with various disturbing aspects of the western industrial civilization and with the alarming developments in the countries affected by technological progress. There is no other artist who has captured with equal power the ills plaguing modern society – the rampaging technology taking over human life, suicidal destruction of the eco system of the earth, violence reaching apocalyptic proportions, sexual excesses, insanity of life driving people to mass consumption of tranquilizers and narcotic drugs, and the alienation individuals experience in relation to their bodies, to each other, and to nature.

. . . Giger’s art clearly comes from the depth of the collective unconscious, especially when we consider his prolific creative process. He reports that he often has no a priori concept of what a painting would look like. When creating some of his giant paintings, for instance, he started in the upper left corner and aimed the airbrush toward the canvas. The creative force was simply pouring through him, and he became its instrument. And yet the end result was a perfect composition and often showed remarkable bilateral symmetry.

. . . Giger’s determined quest for creative self-expression is inseparable from his relentless self-exploration and self-healing. In the analytic psychology of C. G. Jung, integration of the Shadow and the Anima, two quintessential motifs in Giger’s art, are seen as critical therapeutic steps in what Jung calls the process of individuation. Giger himself experiences his art as healing and as an important way to maintain his sanity. His art can also have a healing impact on those who are open to it because, like a Greek tragedy, it can facilitate powerful emotional catharsis for the viewers by exposing and revealing dark secrets of the human psyche.

Recommended Reading 36

This week: How entire U.S. towns now rely on food stamps. The regrets of the Iraqi “sledgehammer man,” whose image became famous in Western media when Saddam’s statue fell. The Obama administration’s epic (and hypocritical) focus on secrecy. The demise of Google Reader and what it portends for Net-i-fied life and culture. The sinister rise of an all-pervasive — and unblinkingly embraced — Orwellian Big Brotherism in the age of Big Data, with a focus on Facebook’s “Like” button, Google Glass, and Google’s vision of “a future of frictionless, continuous shopping.” A surge of ghost sightings and spiritual troubles among survivors of Japan’s earthquake and tsunami. The rise of the “Little Free Libraries” movement in America and abroad. Read the rest of this entry

Recommended Reading 17

 

This week’s recommendations encompass the spiritual past and future of money and capitalism; the use of neuroscience by tech companies to profit from Internet addiction; the future of books, libraries, and old movies in an age of digital instant gratification and a perpetually shrinking historical awareness; the deep appeal of fairy tales; thoughts on a new future for the debate over paranormal abilities; a riveting first-person account of what it’s like to live with cosmically horrifying panic attacks, and of the way these impact a person’s worldview; and a nice compilation of speech excerpts from Robert Anton Wilson about the nature of reality.

Read the rest of this entry

Google: Not making us stupid, not making us smart

A recently published essay by University of Virginia professor Chad Wellmon in The Hedgehog Review stands as one of the most elegant, incisive, and persuasive entries I’ve yet read in the great debate over the effects of the Internet/digital media revolution on human consciousness and culture. And I’ve read a fair number of them.

Wellmon says:

On the one hand, there are those who claim that the digitization efforts of Google, the social-networking power of Facebook, and the era of big data in general are finally realizing that ancient dream of unifying all knowledge…[They say] Our information age is unique not only in its scale, but in its inherently open and democratic arrangement of information. Information has finally been set free. Digital technologies, claim the most optimistic among us, will deliver a universal knowledge that will make us smarter and ultimately liberate us. These utopic claims are related to similar visions about a trans-humanist future in which technology will overcome what were once the historical limits of humanity: physical, intellectual, and psychological

Read the rest of this entry

Your personal filter bubble, or What Facebook and Google are hiding from you

You would have had to be hiding under the proverbial rock in order to avoid hearing about the concept of the “filter bubble” in the past year. It comes from peace activist and MoveOn.org cofounder Eli Pariser’s 2011 book The Filter Bubble: What the Internet Is Hiding from You. The basic idea is that the rise of “personalization” in Internet searches — the tendency of Google and Facebook and Netflix and other prominent online services to use complex algorithms to gradually tailor search results to the perceived preferences of each user — ends up blocking out the true fullness and richness of the world of information and ideas. “Search engines weight our search results to our own preferences. (My search results won’t look like yours.) Sites will filter our news (without asking us) to bring us what they think we want,” a reviewer for The Christian Science Monitor summarizes. “The consequences of this social engineering, Pariser argues, is that we interact more with people who think like we do. Rather than fulfilling the early Internet dreams of diversity and freedom of choice, we are living in an echo chamber.” Read the rest of this entry
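To make the mechanism concrete, here is a deliberately toy sketch in Python (my own illustration, not anything drawn from Pariser’s book or from any actual Google, Facebook, or Netflix system) of how a preference-weighted ranker ends up hiding whatever falls outside a user’s existing interests:

```python
# Purely illustrative: a toy "personalization" ranker of the kind Pariser
# describes. It re-orders candidate results by overlap with topics the user
# has already clicked on, so unfamiliar topics quietly sink out of view.
from collections import Counter

def personalized_ranking(results, click_history, top_n=5):
    """Rank candidate results by how well they match the user's click history."""
    # Count how often each topic appears in the user's past clicks.
    preferences = Counter(topic for item in click_history for topic in item["topics"])

    def score(item):
        # Items that match past interests score high; novel topics score zero.
        return sum(preferences[topic] for topic in item["topics"])

    return sorted(results, key=score, reverse=True)[:top_n]

if __name__ == "__main__":
    history = [{"topics": ["tech", "gadgets"]}, {"topics": ["tech"]}]
    candidates = [
        {"title": "New smartphone review", "topics": ["tech", "gadgets"]},
        {"title": "Local election coverage", "topics": ["politics"]},
        {"title": "Climate report", "topics": ["science"]},
    ]
    for item in personalized_ranking(candidates, history, top_n=1):
        print(item["title"])  # only the item matching past interests surfaces
```

Real recommendation systems are incomparably more sophisticated, of course, but the structural bias Pariser worries about is the same: relevance gets defined as resemblance to what you have already shown interest in.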

From Google’s “in-house philosopher,” a beautiful credo in defense of studying the humanities

Here at The Teeming Brain I’ve gone on at some length about the disastrous/dystopian trends in contemporary American education, including, especially, the rise of the techno-corporate consumer model that assigns a purely economic raison d’être to higher education. (See, for example, my “America’s Colleges at a Crossroads” series and additional articles.) Today I’m fascinated, and rather psyched, to discover a smart and forceful statement in favor of pursuing a humanities-oriented education, written by somebody who earned a grad degree at MIT and then launched into a lucrative career in computer programming, only to abandon it a few years later to earn a Ph.D. in philosophy because his technological interests organically led him to a passionate personal focus on philosophical matters.

Damon Horowitz’s bio says he “is currently in-house philosopher at Google” — an intriguing job title if ever I heard one — and his essay published yesterday (July 17) at The Chronicle of Higher Education is described as “an excerpt of a keynote address he gave in the spring at the BiblioTech conference at Stanford University.” A quick Google search reveals that the address itself was titled “Why You Should Quit Your Technology Job and Get a Humanities Ph.D.”

Here are some choice highlights from a highlight-filled essay that’s quotable almost in toto:

I wanted to better understand what it was about how we were defining intelligence that was leading us astray: What were we failing to understand about the nature of thought in our attempts to build thinking machines? And, slowly, I realized that the questions I was asking were philosophical questions — about the nature of thought, the structure of language, the grounds of meaning. So if I really hoped to make major progress in AI, the best place to do this wouldn’t be another AI lab. If I really wanted to build a better thinker, I should go study philosophy.

[…]

In learning the limits of my technologist worldview, I didn’t just get a few handy ideas about how to build better AI systems. My studies opened up a new outlook on the world. I would unapologetically characterize it as a personal intellectual transformation: a renewed appreciation for the elements of life that are not scientifically understood or technologically engineered.

In other words: I became a humanist.

[…]

Maybe you, too, are disposed toward critical thinking. Maybe, despite the comfort and security that your job offers, you, too, have noticed cracks in the technotopian bubble.

Maybe you are worn out by endless marketing platitudes about the endless benefits of your products; and you’re not entirely at ease with your contribution to the broader culture industry. Maybe you are unsatisfied by oversimplifications in the product itself. What exactly is the relationship created by “friending” someone online? How can your online profile capture the full glory of your performance of self? Maybe you are cautious about the impact of technology. You are startled that our social-entertainment Web sites are playing crucial roles in global revolutions. You wonder whether those new tools, like any weapons, can be used for evil as well as good, and you are reluctant to engage in the cultural imperialism that distribution of a technology arguably entails.

[…]

[D]o you really value your mortgage more than the life of the mind? What is the point of a comfortable living if you don’t know what the humanities have taught us about living well? If you already have a job in the technology industry, you are already significantly more wealthy than the vast majority of our planet’s population. You already have enough.

If you are worried about your career, I must tell you that getting a humanities Ph.D. is not only not a danger to your employability, it is quite the opposite. I believe there is no surer path to leaping dramatically forward in your career than to earn a Ph.D. in the humanities. Because the thought leaders in our industry are not the ones who plodded dully, step by step, up the career ladder. The leaders are the ones who took chances and developed unique perspectives.

Complete text at The Chronicle’s site: “From Technologist to Philosopher”

What’s more, Horowitz’s speech is on YouTube:

The Google Effect: New evidence of the Internet’s impact on brain and memory recalls Plato’s ancient warning

It’s not every day you get to note/observe/say something like this: A 2400-year-old warning from Plato has just been confirmed, or at least inadvertently recalled, by newly published research about the cognitive and neurological effects of our now-ubiquitous culture of Internet searching.

Here’s the lowdown:

Researchers at Columbia University. . . say Google and its search-engine brethren have started to reshape your brain, making you more likely to forget information that is only a quick Internet search away. The new research, published in Science magazine, suggests that people are adapting to the very existence of search engines. For most of us, the thinking goes, the “what” isn’t what matters now; it’s the “where,” as in where can you find the information.

In a series of experiments, researchers found that subjects “were significantly more likely to remember information if they thought they would not be able to find it later,” as the New York Times puts it. When subjects were given information and folder names in which the info was stored in one test, they were more likely to recall the folder names than the information itself. The researchers, naturally, have coined a term for this development: the “Google effect.”

The Internet has become “an external memory source that we can access at any time,” Betsy Sparrow, the study’s principal researcher, explains on Columbia’s website.

— “Study Shows Internet Alters Memory,” Christina Gossmann, Slate, July 15, 2011

And here’s the Plato connection: In the Phaedrus, a dialogue written circa 370 B.C.E., Plato depicted his teacher Socrates telling the story of Thamus, a great Egyptian king who once entertained the god Theuth, inventor of mathematics, astronomy, and many other such things, including, most famously, writing. (Obviously, Theuth is probably Plato/Socrates’ variation on Thoth or Hermes.) Theuth showed Thamus many of his inventions, and Thamus praised them all. But Theuth was especially proud of his invention of writing, and he introduced it to Thamus by saying, “Here is an accomplishment, my lord the King, which will improve both the wisdom and the memory of the Egyptians. I have discovered a sure receipt for memory and wisdom.”

In a needle-scratching response that clashes jaggedly with our modern-day cultural assumptions about the supreme intellectual value of writing and literacy, Thamus vigorously disagreed that writing was a good thing. The wording of his reply makes it sound like he was peering through a wormhole into the 21st century and reading the new Columbia University report:

You, who are the father of writing, have out of fondness for your offspring attributed to it quite the opposite of its real function. Those who acquire it will cease to exercise their memory and become forgetful; they will rely on writing to bring things to their remembrance by external signs instead of by their own internal resources. What you have discovered is a receipt for recollection, not for memory.

Phaedrus, trans. Walter Hamilton

So what are we to make of this? For one thing, it’s instructive to consider the possible negative effects of the phenomenon in question. Nicole Ferraro, writing for Internet Evolution, offers these thoughts on the Columbia study’s findings:

To tell the truth, we’ve done this to ourselves: Why know directions when we can get turn-by-turn directions on our iPhones? Why remember someone’s email address when Gmail is going to produce it automatically when we begin typing letters? I mean, why remember any fact that can easily be pulled up on the search engines we carry in our pockets? And how are we expected to remember information when we’re consuming so much at once and jumping from task to task?

So, yes, this was bound to happen. But what are the implications of this? Would you agree with the headline on this Register article about the same study?: “Google turning us into forgetful morons.”

— “Redefining ‘Knowledge’ in the Age of Google,” Nicole Ferraro, Internet Evolution, July 15, 2011

But Ferraro’s concerns about forgotten directions and phone numbers and email addresses pale in comparison to the dire moral/intellectual/social prognosis that Thamus drew from his insight:

And as for wisdom, your pupils will have the reputation for it without the reality: they will receive a quantity of information without proper instructions, and in consequence be thought very knowledgeable when they are for the most part quite ignorant. And because they are filled with the conceit of wisdom instead of real wisdom they will be a burden to society.

The aforementioned Betsy Sparrow, head of the Columbia University research team, told The New York Times that “her experiments had led her to conclude that the Internet has become our primary external storage system. ‘Human memory,’ she said, ‘is adapting to new communications technology.’” Inspired by Thamus, Plato, and all of the above, we might pause to notice the troubling conundrum built into the very idea of an “external storage medium” for the human mind. Don’t get me wrong: I’m not knocking literacy. There’s obviously a very strong case to be made that it’s precisely our development of external storage media — books and so on, and now the Internet — that has allowed us to accomplish so many of the great things we have accomplished as a species. There’s an equally strong case that the very definition of wisdom must now involve literacy: not just the bare-bones reading and writing skills promoted by godawful social engineering programs like No Child Left Behind, but a foundational knowledge of the great things that have been thought, said, and written in the past, along with — to ping another aspect of the Columbia report — the increasingly important ability to access information accurately. Profound forgetfulness of the past, now preserved in writing and increasingly in digital form, is the very definition of a dystopian dark age.

But that said, we’ll all be well-advised to keep an eye and ear on the judgment of Thamus as we live our way inevitably into the Brave New World of our collective cyberfuture. A crucial aspect of authentic wisdom is the ability, and more, the drive, to become aware of our guiding axioms, so that we can really see, know, and understand — and question and, when necessary, revise or reject — the assumptions by which we conduct our lives. “The unexamined life is not worth living,” Socrates famously said. Today we’re living in a period where technology’s power within and over culture and human life is reaching a kind of critical mass, just as Neil Postman observed and prophesied in Technopoly: The Surrender of Culture to Technology — a book that begins with Postman’s recounting of the story of Thamus and his judgment on writing. We long ago passed the threshold where writing and literacy became an ineradicable and inescapable part of who we are, both societally and individually. As we hurtle toward a future along the lines of, perhaps, what the Singularitans are slavering to see, one of the simplest yet trickiest things we can do to keep our bearings and preserve our humanity — even as the very meaning of that word may begin to shift — is to remain awake and reflective about the changes it’s all working on our very souls.

Image credit: “Computer Business,” from Truthout.org under Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic (CC BY-NC-SA 2.0)

Google CEO Worries that Google Is Making Us Stupid

Google CEO Eric Schmidt speaking at a past World Economic Forum (2008)

Okay, so the headline I gave to this post is a bit slanted for rhetorical effect. When Eric Schmidt, Google’s 54-year-old chief executive and chairman, spoke last Friday, January 29, at the World Economic Forum in Davos, Switzerland, he didn’t actually repeat and respond to the question contained in the sensationalistic headline of Nicholas Carr’s 2008 Atlantic article, “Is Google Making Us Stupid? What the Internet Is Doing to Our Brains.”

But, as reported by news outlets everywhere, he did voice some of the same concerns that Carr highlighted, and in very direct and forceful terms, too. Specifically, he expressed concern that today’s young people are experiencing serious impairment in their ability to perform “deep reading” as they grow up in our frenetic, buzzing, always-online environment of mobile electronic devices.

Even more specifically, he told his audience of world economic movers and shakers,

The one [thing] that I do worry about is the question of “deep reading”….As the world looks to these instantaneous devices…you spend less time reading all forms of literature, books, magazines and so forth….That probably has an effect on cognition, probably has an effect on reading.

Now that’s interesting! And it’s also — to give credit where scads are due — not all that surprising. Schmidt isn’t just mouthing these concerns, and his focus on them isn’t new. For instance, when he spoke in 2009 at the University of Pennsylvania’s commencement ceremony, he urged the new graduates to turn off their computers and discover the non-computerized world of human relationships and the natural environment.

And, interestingly, he may have been led at least in part to concentrate on such things by Carr’s very article, as recounted by Carr himself in a response to Schmidt’s Davos speech (“Eric Schmidt’s Second Thoughts,” Jan. 30). Carr traces how, first, Schmidt responded publicly to the “Stupid” article in 2008 by pointing out that people made the same ominous claims about plummeting intelligence levels and attention spans when color television was introduced, when MTV was launched, etc., and yet today “we’re smarter than ever.” But then a few months later, in early 2009, Schmidt told Charlie Rose,

I worry that the level of interrupt, the sort of overwhelming rapidity of information — and especially of stressful information — is in fact affecting cognition. It is in fact affecting deeper thinking. I still believe that sitting down and reading a book is the best way to really learn something. And I worry that we’re losing that.

And then came the Davos speech in early 2010.

Carr comments, “I’m glad Schmidt has continued to ponder this issue, and I salute him for having the courage to air his concerns publicly.”

To which I respond with a hearty “Ditto,” a thousand times over. For what it’s worth, I shamelessly love Google’s search engine, and have found myself deriving lots of benefit from some of its associated tools over the past year or two (Calendar, Reader, et al.). So I certainly can’t bash the company with a straight face. But its Leviathan-like rise to dominance is certainly valid cause for concern. A host of troubling moral questions arise from Google’s pervasive influence, as seen in its de facto ability to make or break businesses and companies with a mere tweak of its search algorithm, its virtually single-handed role in giving rise to SEO as a dominant marketing concern, its inherent growth into a threat to every business sector that it enters, and so on. And thus it’s deeply encouraging to see that Schmidt, who wields so much power over today’s social and business environment, possesses not just a high IQ and business smarts (he obviously didn’t attain the position of Google CEO by being a dummy) but a reflective sensibility for deeply human concerns to boot.

Image Credit: http://www.flickr.com/photos/worldeconomicforum/ / CC BY-SA 2.0