Category Archives: Science & Technology

Your smartphone is built to hijack and harvest your mind

At the beginning of each semester, when I deliver a mini-sermon about my complete ban on phones — and also, for almost all purposes, laptops — in my classroom, I tell my students the very thing that journalist Zat Rana gets at in a recent article for Quartz. A smartphone, or almost any cell phone, in your hand, on your desk, or even in your pocket as you’re trying to concentrate on other important things is a vampire demon powered by dystopian corporate overlords whose sole purpose is to suck your soul by siphoning away your attention and immersing you in a portable customized Matrix.

Or as Rana says, in less metaphorical language:

One of the biggest problems of our generation is that while the ability to manage our attention is becoming increasingly valuable, the world around us is being designed to steal away as much of it as possible. . . . Companies like Google and Facebook aren’t just creating products anymore. They’re building ecosystems. And the most effective way to monetize an ecosystem is to begin with engagement. It’s by designing their features to ensure that we give up as much of our attention as possible.

Full Text: “Technology is destroying the most important asset in your life”

Rana offers three pieces of sound advice for helping to reclaim your attention (which is the asset referred to in the title): mindfulness meditation, “ruthless single-tasking,” and regular periods of deliberate detachment from the digital world.

Interestingly, it looks like there’s a mini-wave of this type of awareness building in the mediasphere. Rana’s article for Quartz was published on October 2. Four days later The Guardian published a provocative and alarming piece with this teaser: “Google, Twitter and Facebook workers who helped make technology so addictive are disconnecting themselves from the internet. Paul Lewis reports on the Silicon Valley refuseniks alarmed by a race for human attention.” It’s a very long and in-depth article. Here’s a taste:

There is growing concern that as well as addicting users, technology is contributing toward so-called “continuous partial attention”, severely limiting people’s ability to focus, and possibly lowering IQ. One recent study showed that the mere presence of smartphones damages cognitive capacity — even when the device is turned off. “Everyone is distracted,” Rosenstein says. “All of the time.”

But those concerns are trivial compared with the devastating impact upon the political system that some of Rosenstein’s peers believe can be attributed to the rise of social media and the attention-based market that drives it. . . .

Tech companies can exploit such vulnerabilities to keep people hooked; manipulating, for example, when people receive “likes” for their posts, ensuring they arrive when an individual is likely to feel vulnerable, or in need of approval, or maybe just bored. And the very same techniques can be sold to the highest bidder. . . .

“The dynamics of the attention economy are structurally set up to undermine the human will,” [ex-Google strategist James Williams] says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.” If Apple, Facebook, Google, Twitter, Instagram and Snapchat are gradually chipping away at our ability to control our own minds, could there come a point, I ask, at which democracy no longer functions?

“Will we be able to recognise it, if and when it happens?” Williams replies. “And if we can’t, then how do we know it hasn’t happened already?”

Full Text: “‘Our minds can be hijacked’: The tech insiders who fear a smartphone dystopia”

In the same vein, Nicholas Carr (no stranger to The Teeming Brain’s pages) published a similarly aimed — and even a similarly titled — essay in the Weekend Review section of The Wall Street Journal on the very day the Guardian article appeared (October 6). “Research suggests that as the brain grows dependent on phone technology, the intellect weakens,” says the teaser. Here’s a representative passage from the essay itself:

Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object in the environment that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.” Media and communication devices, from telephones to TV sets, have always tapped into this instinct. Whether turned on or switched off, they promise an unending supply of information and experiences. By design, they grab and hold our attention in ways natural objects never could.

But even in the history of captivating media, the smartphone stands out. It’s an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what [Adrian] Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it’s part of the surroundings — which it always is. Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library, and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That’s what a smartphone represents to us. No wonder we can’t take our minds off it.

Full Text: “How Smartphones Hijack Our Minds”

At his blog Carr noted that his essay and the Guardian article appeared on the same day. He also noted the similarity between their titles, calling it a “telling coincidence” and commenting:

It’s been clear for some time that smartphones and social-media apps are powerful distraction machines. They routinely divide our attention. But the “hijack” metaphor — I took it from Adrian Ward’s article “Supernormal” — implies a phenomenon greater and more insidious than distraction. To hijack something is to seize control of it from its rightful owner. What’s up for grabs is your mind.

Perhaps the most astonishing thing about all of this is that John Carpenter warned us about it three decades ago, and not vaguely, but quite specifically and pointedly. The only difference is that the technology in his (quasi-)fictional presentation was television. Well, that, plus the fact that his evil overlords really were ill-intentioned, whereas ours may be in some cases as much victims of their own devices as we are. In any event:

There is a signal broadcast every second of every day through our television sets. Even when the set is turned off, the brain receives the input. . . . Our impulses are being redirected. We are living in an artificially induced state of consciousness that resembles sleep. . . . We have been lulled into a trance.

Art, creativity, and what Google doesn’t know

From an essay by Ed Finn, founding director of the Center for Science and the Imagination at Arizona State University:

We are all centaurs now, our aesthetics continuously enhanced by computation. Every photograph I take on my smartphone is silently improved by algorithms the second after I take it. Every document autocorrected, every digital file optimised. Musicians complain about the death of competence in the wake of Auto-Tune, just as they did in the wake of the synthesiser in the 1970s. It is difficult to think of a medium where creative practice has not been thoroughly transformed by computation and an attendant series of optimisations. . . .

Today, we experience art in collaboration with these algorithms. How can we disentangle the book critic, say, from the highly personalised algorithms managing her notes, communications, browsing history and filtered feeds on Facebook and Instagram? . . . .

The immediate creative consequence of this sea change is that we are building more technical competence into our tools. It is getting harder to take a really terrible digital photograph, and in correlation the average quality of photographs is rising. From automated essay critiques to algorithms that advise people on fashion errors and coordinating outfits, computation is changing aesthetics. When every art has its Auto-Tune, how will we distinguish great beauty from an increasingly perfect average? . . .

We are starting to perceive the world through computational filters, perhaps even organising our lives around the perfect selfie or defining our aesthetic worth around the endorsements of computationally mediated ‘friends’. . . .

Human creativity has always been a response to the immense strangeness of reality, and now its subject has evolved, as reality becomes increasingly codeterminate, and intermingled, with computation. If that statement seems extreme, consider the extent to which our fundamental perceptions of reality — from research in the physical sciences to finance to the little screens we constantly interject between ourselves in the world — have changed what it means to live, to feel, to know. As creators and appreciators of the arts, we would do well to remember all the things that Google does not know.

FULL TEXT: “Art by Algorithm

 

Bem’s Precognition Research and the Crisis in Contemporary Science

Recently Daryl Bem defended his famous research into precognition in a letter to The Chronicle of Higher Education. More recently, as in this week, Salon published a major piece about Bem and his research that delves deeply into its implications for the whole of contemporary science — especially psychology and the other social sciences (or “social sciences”), but also the wider world of science in general — and shows how Bem’s research, and the reactions to it, have highlighted some very serious problems:

Bem’s 10-year investigation, his nine experiments, his thousand subjects—all of it would have to be taken seriously. He’d shown, with more rigor than anyone ever had before, that it might be possible to see into the future. Bem knew his research would not convince the die-hard skeptics. But he also knew it couldn’t be ignored.

When the study went public, about six months later, some of Bem’s colleagues guessed it was a hoax. Other scholars, those who believed in ESP — theirs is a small but fervent field of study — saw his paper as validation of their work and a chance for mainstream credibility.

But for most observers, at least the mainstream ones, the paper posed a very difficult dilemma. It was both methodologically sound and logically insane. Daryl Bem had seemed to prove that time can flow in two directions — that ESP is real. If you bought into those results, you’d be admitting that much of what you understood about the universe was wrong. If you rejected them, you’d be admitting something almost as momentous: that the standard methods of psychology cannot be trusted, and that much of what gets published in the field — and thus, much of what we think we understand about the mind — could be total bunk.

If one had to choose a single moment that set off the “replication crisis” in psychology — an event that nudged the discipline into its present and anarchic state, where even textbook findings have been cast in doubt — this might be it: the publication, in early 2011, of Daryl Bem’s experiments on second sight.

The replication crisis as it’s understood today may yet prove to be a passing worry or else a mild problem calling for a soft corrective. It might also grow and spread in years to come, flaring from the social sciences into other disciplines, burning trails of cinder through medicine, neuroscience, and chemistry. It’s hard to see into the future. But here’s one thing we can say about the past: The final research project of Bem’s career landed like an ember in the underbrush and set his field ablaze. . . .

When Bem started investigating ESP, he realized the details of his research methods would be scrutinized with far more care than they had been before. In the years since his work was published, those higher standards have increasingly applied to a broad range of research, not just studies of the paranormal. “I get more credit for having started the revolution in questioning mainstream psychological methods than I deserve,” Bem told me. “I was in the right place at the right time. The groundwork was already pre-prepared, and I just made it all startlingly clear.”

Looking back, however, his research offered something more than a vivid illustration of problems in the field of psychology. It opened up a platform for discussion. Bem hadn’t simply published a set of inconceivable findings; he’d done so in a way that explicitly invited introspection. In his paper proving ESP is real, Bem used the word replication 33 times. Even as he made the claim for precognition, he pleaded for its review.

“Credit to Daryl Bem himself,” [University of California-Berkeley business school professor] Leif Nelson told me. “He’s such a smart, interesting man. . . . In that paper, he actively encouraged replication in a way that no one ever does. He said, ‘This is an extraordinary claim, so we need to be open with our procedures.’ . . . It was a prompt for skepticism and action.”

Bem meant to satisfy the skeptics, but in the end he did the opposite: He energized their doubts and helped incite a dawning revolution. Yet again, one of the world’s leading social psychologists had made a lasting contribution and influenced his peers. “I’m sort of proud of that,” Bem conceded at the end of our conversation. “But I’d rather they started to believe in psi as well. I’d rather they remember my work for the ideas.”
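The statistical logic behind that dilemma is worth spelling out. Under the conventional p < .05 threshold, a field that runs many tests of effects that don’t exist will still “confirm” about one in twenty of them, and flexible practices such as peeking at the data and stopping once significance appears inflate that rate much further. A small simulation makes this concrete (it’s a generic illustration of how the threshold behaves, not a model of Bem’s actual experiments):

```python
import math
import random

random.seed(0)

def study_p_value(n):
    """One simulated 'study': two groups of n draws from the SAME normal
    distribution (so the true effect is exactly zero); returns the
    two-sided p-value of a z-test on the difference in means."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(0.0, 1.0) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2.0 / n)
    return math.erfc(abs(z) / math.sqrt(2.0))  # equals 2 * (1 - Phi(|z|))

def false_positive_rate(n_studies=2000, n=50, peek=False):
    """Fraction of null studies that reach p < .05. With peek=True, each
    'study' gets several looks at different sample sizes (modeled crudely
    here as independent analyses) and counts as a hit if ANY look succeeds."""
    hits = 0
    for _ in range(n_studies):
        if peek:
            hits += any(study_p_value(k) < 0.05 for k in (20, 30, 40, 50))
        else:
            hits += study_p_value(n) < 0.05
    return hits / n_studies
```

Run as-is, the plain rate lands near the nominal 5 percent, while the peeking version climbs to roughly three or four times that: the same analytic freedom that lets a careful researcher find real effects also lets pure noise masquerade as discovery.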

Note that the article also contains, in its middle section, a fascinating personal profile and mini-biography of Bem himself, including a recounting of his life-long interest in mentalism, which began in his teen years and persisted into his career in academia:

As a young professor at Carnegie Mellon University, Bem liked to close out each semester by performing as a mentalist. After putting on his show, he’d tell his students that he didn’t really have ESP. In class, he also stressed how easily people can be fooled into believing they’ve witnessed paranormal phenomena.

FULL TEXT: Daryl Bem Proved ESP Is Real. Which Means Science Is Broken.

Orwell Meets Frankenstein: The Internet as a Monster of Mass Surveillance and Social Control

The following paragraphs are from a talk delivered by Pinboard founder Maciej Cegłowski at the recent Emerging Technologies for the Enterprise conference in Philadelphia.  Citing as Exhibit A the colossal train wreck that was the 2016 American presidential election, Cegłowski basically explains how, in the current version of the Internet that has emerged over the past decade-plus, we have collectively created a technology that is perfectly calibrated for undermining Western democratic societies and ideals.

But as incisive as his analysis is, I seriously doubt that his (equally incisive) proposed solutions, described later in the piece, will ever be implemented to any meaningful extent. I mean, if we’re going to employ the explicitly Frankensteinian metaphor of “building a monster,” then it’s important to bear in mind that Victor Frankenstein and his wretched creation did not find their way to anything resembling a happy ending. (And note that Cegłowski himself acknowledges as much at the end of his piece when he closes his discussion of proposed solutions by asserting that “even though we’re likely to fail, all we can do is try.”)

This year especially there’s an uncomfortable feeling in the tech industry that we did something wrong, that in following our credo of “move fast and break things,” some of what we knocked down were the load-bearing walls of our democracy. . . .

A question few are asking is whether the tools of mass surveillance and social control we spent the last decade building could have had anything to do with the debacle of the 2017 [sic] election, or whether destroying local journalism and making national journalism so dependent on our platforms was, in retrospect, a good idea. . . .

We built the commercial internet by mastering techniques of persuasion and surveillance that we’ve extended to billions of people, including essentially the entire population of the Western democracies. But admitting that this tool of social control might be conducive to authoritarianism is not something we’re ready to face. After all, we’re good people. We like freedom. How could we have built tools that subvert it? . . .

The economic basis of the Internet is surveillance. Every interaction with a computing device leaves a data trail, and whole industries exist to consume this data. Unlike dystopian visions from the past, this surveillance is not just being conducted by governments or faceless corporations. Instead, it’s the work of a small number of sympathetic tech companies with likable founders, whose real dream is to build robots and Mars rockets and do cool things that make the world better. Surveillance just pays the bills. . . .

Orwell imagined a world in which the state could shamelessly rewrite the past. The Internet has taught us that people are happy to do this work themselves, provided they have their peer group with them, and a common enemy to unite against. They will happily construct alternative realities for themselves, and adjust them as necessary to fit the changing facts . . . .

A lot of what we call “disruption” in the tech industry has just been killing flawed but established institutions, and mining them for parts. When we do this, we make a dangerous assumption about our ability to undo our own bad decisions, or the time span required to build institutions that match the needs of new realities.

Right now, a small caste of programmers is in charge of the surveillance economy, and has broad latitude to change it. But this situation will not last for long. The kinds of black-box machine learning that have been so successful in the age of mass surveillance are going to become commoditized and will no longer require skilled artisans to deploy. . . .

Unless something happens to mobilize the tech workforce, or unless the advertising bubble finally bursts, we can expect the weird, topsy-turvy status quo of 2017 to solidify into the new reality.

FULL TEXT: “Build a Better Monster: Morality, Machine Learning, and Mass Surveillance”

Big Data, Artificial Intelligence, and Dehumanization: Surrendering to the Death of Democracy

 

Greetings, Teeming Brainers. I’m just peeking in from the digital wings, amid much ongoing blog silence, to observe that many of the issues and developments — sociocultural, technological, and more — that I began furiously tracking here way back in 2006 are continuing to head in pretty much the same direction. A case in point is provided by the alarming information, presented in a frankly alarmed tone, that appears in this new piece from Scientific American (originally published in SA’s German-language sister publication, Spektrum der Wissenschaft):

Everything started quite harmlessly. Search engines and recommendation platforms began to offer us personalised suggestions for products and services. This information is based on personal and meta-data that has been gathered from previous searches, purchases and mobility behaviour, as well as social interactions. While officially, the identity of the user is protected, it can, in practice, be inferred quite easily. Today, algorithms know pretty well what we do, what we think and how we feel — possibly even better than our friends and family or even ourselves. Often the recommendations we are offered fit so well that the resulting decisions feel as if they were our own, even though they are actually not our decisions. In fact, we are being remotely controlled ever more successfully in this manner. The more is known about us, the less likely our choices are to be free and not predetermined by others.

But it won’t stop there. Some software platforms are moving towards “persuasive computing.” In the future, using sophisticated manipulation technologies, these platforms will be able to steer us through entire courses of action, be it for the execution of complex work processes or to generate free content for Internet platforms, from which corporations earn billions. The trend goes from programming computers to programming people. . . .

[I]t can be said that we are now at a crossroads. Big data, artificial intelligence, cybernetics and behavioral economics are shaping our society — for better or worse. If such widespread technologies are not compatible with our society’s core values, sooner or later they will cause extensive damage. They could lead to an automated society with totalitarian features. In the worst case, a centralized artificial intelligence would control what we know, what we think and how we act. We are at the historic moment, where we have to decide on the right path — a path that allows us all to benefit from the digital revolution

Oh, and for a concrete illustration of all the above, check this out:

How would behavioural and social control impact our lives? The concept of a Citizen Score, which is now being implemented in China, gives an idea. There, all citizens are rated on a one-dimensional ranking scale. Everything they do gives plus or minus points. This is not only aimed at mass surveillance. The score depends on an individual’s clicks on the Internet and their politically-correct conduct or not, and it determines their credit terms, their access to certain jobs, and travel visas. Therefore, the Citizen Score is about behavioural and social control. Even the behaviour of friends and acquaintances affects this score, i.e. the principle of clan liability is also applied: everyone becomes both a guardian of virtue and a kind of snooping informant, at the same time; unorthodox thinkers are isolated. Were similar principles to spread in democratic countries, it would be ultimately irrelevant whether it was the state or influential companies that set the rules. In both cases, the pillars of democracy would be directly threatened.
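The “clan liability” mechanism described above, in which your score depends partly on the scores of the people around you, is simple to state precisely, which is part of what makes it so effective as an instrument of social pressure. A purely hypothetical sketch (every weight and rule here is invented for illustration, and nothing is drawn from the actual Chinese system):

```python
def citizen_score(own_points, contact_scores, clan_weight=0.3):
    """Toy one-dimensional rating: a person's own plus/minus points,
    blended with the average score of their contacts ("clan liability").
    The 0.3 weighting is an arbitrary stand-in, not a real parameter."""
    if not contact_scores:
        return float(own_points)
    clan_average = sum(contact_scores) / len(contact_scores)
    return (1 - clan_weight) * own_points + clan_weight * clan_average
```

Even this toy version exhibits the informant dynamic the article warns about: a person of impeccable conduct (say, 100 points) whose contacts all score 0 ends up at 70, so everyone acquires a direct personal incentive to police, or simply shun, low scorers.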

FULL ARTICLE: Will Democracy Survive Big Data and Artificial Intelligence?

Of course, none of this is real news to anybody who has been paying attention. It’s just something that people like me, and maybe like you, find troubling enough to highlight and comment on. And maybe, in the end, Cipher from The Matrix will turn out to have been right: Maybe ignorance really is bliss. Because from where I’m sitting, there doesn’t appear to be anything one can do to stop this steamrollering, metastasizing, runaway-train-like dystopian trend. Talking about it is just that: talk. Which is one reason why I’ve lost a portion of the will that originally kept me blogging here for so many years. You can only play the role of Cassandra for so long before the intrinsic attraction begins to dissipate.

Strangling imagination: Science as a form of mental illness

Here’s a generous chunk of a really interesting and incisive blog post by author and Presbyterian pastor C. R. Wiley, who has been writing thoughtfully about religion, science, culture, Lovecraft, C. S. Lewis, and an associated network of ideas and writers for some time now:

For [my scientist friends] the imagination is just a tool for problem solving. It’s not a window to view the real world through; it’s more a technique for envisioning ways out of conceptual impasses.

They’re unable to get past the factness of things. Meaning eludes them. . . .

When I ask my scientific friends, “what does it say?” (referring to any work of art) they look at me blankly. They seem to be unable to move from facts to meanings. Worse, they reduce meaning to facts in some sense. There’s a savanna theory for instance, which asserts with darwinian certitude that the reason some landscapes seem beautiful to us is because our prehistoric ancestors found savannas conducive to survival. (Darwinians have the same answer for everything, what C. S. Lewis is said to have called, “nothing-butterism”, meaning, whatever you think is the case can be reduced to “nothing but” survival.)

Seeing that the scientific method is a fairly recent phenomenon and we’ve had interest in meaning of things from the very beginning of recorded history, what is going on here?

I can’t help but believe something has gone wrong, that in the interest of understanding the world we’ve lost the world. The world is reduced to cause and effect, but its meaning is something we can no longer see.

FULL TEXT: “Is the Scientific Method a Form of Mental Illness?”

You may recall Wiley as the impetus behind one of the more popular posts here at The Teeming Brain in the past few years, “C. S. Lewis and H. P. Lovecraft on loathing and longing for alien worlds.” He’s well worth following. (For a relevant case in point, see his March blog post “H. P. Lovecraft, Evangelist of the Sublime.”)

For more on the mental illness that is scientism and the threat it poses to authentic imagination, see the following:

 

‘Mummies around the World’ is a ‘truly rollicking blend of scientific and pop culture’

 

Mummies_around_the_World

During a week when mummies are on everybody’s mind because of that widely circulated news story about the mummified Buddhist monk found in a Buddha statue, it’s nice to see that Library Journal has posted a review of my recently published Mummies around the World, which contains a long entry titled “Buddhist self-mummification” that’s written by Ron Beckett and Jerry Conlogue, the scientists and mummy experts who used to host National Geographic’s Mummy Road Show.

LJ’s verdict, I’m pleased to say, is positive:

This truly rollicking blend of scientific and pop culture offers facts ranging from actual methods of mummification to an entry on the 1955 movie Abbott and Costello Meet the Mummy and includes trivia-fact boxes. VERDICT: An educational and entertaining compendium that is recommended for all “mummymaniacs” everywhere.

MORE: Review of Mummies around the World (scroll to the bottom of the page)

Remember, you can find the book in your local public, high school, or college library. You can also order it from Amazon, the publisher, and pretty much everyplace else.

Preview of ‘Mummies around the World’ now available

Mummies_around_the_World

A Google Books preview of my mummy encyclopedia is now available. At least from my end — and I know these previews tend to shift and alter sometimes — it shows the full table of contents (two of them, actually, one alphabetical and the other topically organized), the full preface and introduction, portions of the master timeline of mummies throughout history, and a few snippets of the book’s A-Z entries. For those of you who have been following my updates about this project over the past couple of years, here’s a glimpse of the final result.

The book is scheduled for publication on November 30. You can order it from the publisher or from all of the usual retail suspects (Amazon, Barnes & Noble, etc.). You’ll probably also find it in a library near you. And remember, you can view a full list of the book’s contributors with brief bios here.

To reject philosophy is to embrace the Matrix

Matrix_Code

I had considered titling this post “Philosophy slams Neil deGrasse Tyson,” but then I reconsidered. In case you haven’t heard, Tyson recently outed himself as a philistine. Or at least that’s how author and journalist Damon Linker characterizes it in an article titled, appropriately enough, “Why Neil deGrasse Tyson Is a Philistine.” In the words of the article’s teaser, “The popular television host says he has no time for deep, philosophical questions. That’s a horrible message to send to young scientists.”

What Linker is referring to is Tyson’s recent appearance as a guest on the popular Nerdist podcast. Beginning at about 20 minutes into the hour-long program, the conversation between Tyson and his multiple interviewers turns to the subject of philosophy, and Tyson speaks up to talk down the entire field. In fact, he takes pains to make clear that he personally has absolutely no use for philosophy, which he views as a worthless distraction from other activities with real value.

Yes, it all sounds like it must be overstated in the retelling — but in point of fact, it’s not. Have a listen for yourself by clicking the link above, or else read his words here in this transcript of the program’s relevant portion. The comments from Tyson and his interviewers come right after they have been discussing the standardization of weights and measures. Note especially how Tyson not only dismisses philosophy but pointedly refuses to allow that there might be even a shred of validity or value in it.

Invented lifeform: Behold the Strandbeest

Strandbeest

The above image is a photo of a Strandbeest. What, you may ask, is that? Here’s how its creator, the Dutch artist Theo Jansen (who can be seen in the photo as well), explains the matter:

Since 1990 I have been occupied creating new forms of life. Not pollen or seeds but plastic yellow tubes are used as the basic material of this new nature. I make skeletons that are able to walk on the wind, so they don’t have to eat. Over time, these skeletons have become increasingly better at surviving the elements such as storms and water, and eventually I want to put these animals out in herds on the beaches, so they will live their own lives.

If you wonder what this actually entails and looks like in action, see the video below. Be advised that it will probably stand as the coolest and most mind-blowing thing you’ll see all week, month, or maybe year:

Last summer Jansen visited the Peabody Essex Museum in Salem, Massachusetts, in preparation for the first major American exhibition of his work, which will be presented at the PEM in 2015 and titled “The Dream of the Strandbeest.” My sister Dinah is a writer for PEM, and here’s how she described his visit:

Prior to meeting the man behind the Strandbeest, my introduction was the same as most — gazing at online videos of the enormous beach-combing beasts, while trying to teleport myself to that peaceful beach in the Netherlands. From the first moment I saw the lifelike creatures walking their four-legged dog pace, I wondered whether the God-like figure behind these post-apocalyptic-looking critters could likely change the world.

. . . In a roomful of PEM staff, Jansen shared how a Strandbeest works with pistons that act like muscles. Constructed of plastic tubes and recycled water bottles, the creature has a purpose beyond its more obvious one of being beautiful and mysterious. They are built to harness wind power and save eroding beaches. They detect atmospheric pressure and are designed to “pin themselves to the ground” to survive storms. Jansen spends his mornings coming up with difficult algorithms in the workshop, before biking 50 kilometers to the beach to try them out. By the end of the day, he said, the design works or it doesn’t. “The tubes point you in a certain way,” he says. “I’m surprised by how beautiful they are.”

. . . Jansen recently shared the genetic code of the Strandbeest on the web and is proud of the resulting designs in wood, out of Legos, in materials imagined by children and adults, so that the average person can be “infected” with the compulsion to create a Strandbeest. This is how they masterfully reproduce, he points out, adding that he eventually wants to put them out on the beach in herds, so that they can live on their own.

“Maybe it’s only a fairytale in my head . . . a surviving animal on the beach,” he said. “These are all designed for that. Maybe before I die, these animals will be there. This is my horizon, you could say.”
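Jansen’s talk of algorithms and of the Strandbeest’s “genetic code” (the set of tube lengths that determines how the legs walk) points at evolutionary search: mutate the lengths, keep the variants that perform better. Here is a minimal sketch of that idea; the target lengths and the fitness function are invented placeholders, since the real criterion (how smoothly the foot path tracks the ground) is far more involved:

```python
import random

random.seed(1)

# Eleven tube lengths stand in for the "genetic code" of one leg.
# These targets are arbitrary placeholders, not Jansen's actual numbers.
TARGET = [40.0, 36.5, 52.0, 44.1, 58.8, 33.4, 47.7, 61.2, 49.0, 55.3, 42.6]

def fitness(genome):
    # Toy objective: how close the lengths are to the (made-up) ideal.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def evolve(generations=2000, sigma=1.0):
    """(1+1) evolution strategy: nudge one random length, and keep the
    mutant only if it scores better than the current best design."""
    best = [random.uniform(20.0, 70.0) for _ in TARGET]
    for _ in range(generations):
        child = list(best)
        i = random.randrange(len(child))
        child[i] += random.gauss(0.0, sigma)
        if fitness(child) > fitness(best):
            best = child
    return best
```

After a couple of thousand generations the lengths settle close to the targets; in Jansen’s actual practice, the “by the end of the day, the design works or it doesn’t” verdict on the beach plays the role of this fitness function.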

MORE: “Stunning Strandbeests”

For more about Jansen and the Strandbeests, see these write-ups and profiles from NPR, The New Yorker, and The New York Times.

Image by Roel via Flickr under Creative Commons