

Steve Jobs … the Apple CEO shows an image of the new storage centre for iCloud at the Worldwide Developers Conference in San Francisco, June 2011. Photograph: Marcio Jose Sanchez/AP Photo

Perhaps the funniest passage in Walter Isaacson’s monumental book about Steve Jobs comes three quarters of the way through. It is 2009 and Jobs is recovering from a liver transplant and pneumonia. At one point the pulmonologist tries to put a mask over his face when he is deeply sedated. Jobs rips it off and mumbles that he hates the design and refuses to wear it. Though barely able to speak, he orders them to bring five different options for the mask so that he can pick a design he likes. Even in the depths of his hallucinations, Jobs was a control-freak and a rude sod to boot. Imagine what he was like in the pink of health. As it happens, you don’t need to: every discoverable fact about how Jobs, ahem, coaxed excellence from his co-workers is here.

As Isaacson makes clear, Jobs wasn’t a visionary or even a particularly talented electronic engineer. But he was a businessman of astonishing flair and focus, a marketing genius, and – when he was getting it right, which wasn’t always – had an intuitive sense of what the customer would want before the customer had any idea. He was obsessed with the products, rather than with the money: happily, as he discovered, if you get the products right, the money will come.

Isaacson’s book is studded with moments that make you go “wow”. There’s the Apple flotation, which made the 25-year-old Jobs $256m in the days when that was a lot of money. There’s his turnaround of the company after he returned as CEO in 1997: in the previous fiscal year the company lost $1.04bn, but he returned it to profit in his first quarter. There’s the launch of the iTunes store: expected to sell a million songs in six months, it sold a million songs in six days.

More

Sam Leith
Guardian


People walk onto the Bay Bridge before its reopening to traffic after the Loma Prieta earthquake. Parts of the structure had collapsed in the quake. The new eastern span should be finished in 2013.

As planners and bureaucrats start making the case for putting a park at the foot of the new eastern span of the Bay Bridge, they’re urging us to be ambitious and think big thoughts.

In that spirit I offer my own idea to mark the transition from old to new: Let’s leave one section of the span right where it is, and make it a showcase of renewable energy.

This wouldn’t change what is planned for completion in 2013 (keep your fingers crossed). We’d still have our new eastern span with its twin viaducts extending west from Oakland to a single attention-getting tower and then to Yerba Buena Island.

But we’d also have something that even in this age of global wonders is genuinely unique: a single bridge section, 508 feet long, rising from the waters of the bay, a trussed weave of thick steel perched atop X-braced piers.

More

John King
San Francisco Chronicle


Map of the Internet by Bar Ilan University (Photo credit: Lanet-vi program of I. Alvarez-Hamelin et al)

Shortened attention span. Less interest in reflection and introspection. Inability to engage in in-depth thought. Fragmented, distracted thinking.

The ways the Internet supposedly affects thought are as apocalyptic as they are speculative, since all the above are supported by anecdote, not empirical data. So it is refreshing to hear how 109 philosophers, neurobiologists, and other scholars answered, “How is the Internet changing the way you think?” That is the “annual question” at the online salon edge.org, where every year science impresario, author, and literary agent John Brockman poses a puzzler for his flock of scientists and other thinkers.

Although a number of contributors drivel on about, say, how much time they waste on e-mail, the most striking thing about the 50-plus answers is that scholars who study the mind and the brain, and who therefore seem best equipped to figure out how the Internet alters thought, shoot down the very idea. “The Internet hasn’t changed the way we think,” argues neuroscientist Joshua Greene of Harvard. It “has provided us with unprecedented access to information, but it hasn’t changed what [our brains] do with it.” Cognitive psychologist Steven Pinker of Harvard is also skeptical. “Electronic media aren’t going to revamp the brain’s mechanisms of information processing,” he writes. “Texters, surfers, and twitterers” have not trained their brains “to process multiple streams of novel information in parallel,” as is commonly asserted but refuted by research, and claims to the contrary “are propelled by … the pressure on pundits to announce that this or that ‘changes everything.’ ”

More

Sharon Begley
Newsweek

Todd Kashdan has a deep appreciation of anxiety, which makes his engaging book “Curious?” unique among the comfort-promising volumes in the self-help section.

For most of us, anxiety is a decidedly unpleasant emotion — one we strive to avert, whether by avoiding situations that provoke apprehension, latching onto false but comforting certainties, or (my personal favorite) numbing out via our addiction of choice. Pointing out anxiety’s usefulness is akin to putting in a good word for pain.

But of course, it’s not the anxiety itself that causes problems but those dysfunctional coping mechanisms. As the George Mason University psychologist [Todd Kashdan] notes, anxiety is in fact one-half of a quite useful yin-yang process. Rather than resist it, he argues, we should acknowledge its existence and turn up the volume on the other side of the equation: the impulse that pulls us toward challenge and exploration.

That is to say, we need to cultivate curiosity.

“Our curiosity and threat detection systems evolved together, and they function to ensure optimal decisions are made in an unpredictable, uncertain world,” he writes. “We are all motivated by the pull toward safety and seek to avoid danger, but we also possess a fundamental motivation to expand and grow as human beings.”

More

Tom Jacobs
Miller-McCune

Perfectionism, as a way of life, tends to be self-defeating. New research suggests it may also be deadly.

That’s the conclusion of a Canadian study of senior citizens just published in the Journal of Health Psychology. Researchers conducted psychological tests on 450 elderly residents of southern Alberta, and then kept tabs on them for 6½ years. During that period, just over 30 percent of the subjects, who ranged in age from 65 to 87, died.

Perfectionists — that is, those who expressed “a strong motivation to be perfect” and revealed a tendency toward “all or nothing thinking” — were approximately 51 percent more likely to have died during the life of the study than those with more reasonable self-expectations. Those who were rated high on neuroticism — for instance, those who reported often feeling tense — did even worse: Their risk of death nearly doubled compared with those with a more relaxed disposition.

In contrast, “risk of death was significantly lower for high scorers in conscientiousness, extraversion and optimism,” reports lead author Prem S. Fry, a research psychologist at British Columbia’s Trinity Western University. She notes that previous research has found that “perfectionism exerts a great deal of stress on health,” while optimism “is viewed as a stress-alleviating factor.”

“In short, our findings confirmed that conscientiousness and extraversion are health-related dimensions that are enabling in their effects, and perfectionism and neuroticism are disabling,” she concludes. “It is noteworthy that these associations endure well into late life.”

The findings have interesting implications for seniors’ health care providers and caregivers. They suggest physicians and family members are well advised to be vigilant in noticing perfectionist tendencies and mindful of the physical and psychological toll they can take.

The desire to pursue a favorite task or hobby at the same high level one achieved in previous years is very understandable, and in many ways commendable. But at the same time, it’s important to be cognizant of the stress such an effort can produce and the negative health effects that can result.

Tom Jacobs
Miller-McCune

We human beings have a long history of proposing theories to unify disparate truths. This yearning to find a transcendent meaning for separate bodies of evidence may be one of our distinguishing traits. You have probably noticed this impulse in your own life: a series of experiences prompts the sense that something is hidden in the bundle of them. Your inner smarts work on the challenge—rationally, via various unconscious processes, and even while sleeping. The “Aha!” moment of identifying the deeper pattern in the evidence is satisfying and joyful; it launches a whole new set of possibilities for you as a person, as an artist.

I see the separate disciplines and fields within the arts and arts learning in that light because, although they seem to comprise disparate bodies of truth, my gut tells me that meaningful, unifying, common truths await, hidden in plain sight. Truths that, when embraced, can change the status quo.

You would be hard pressed to argue that we are a unified field. Practitioners of different art forms just don’t think of themselves as part of a larger functional entity. Even though multidisciplinary performances and presentations are increasingly common, the various artistic tribes compete more often than they cooperate, believing that the concerns they share are less significant than the ones they face on their own. A regional theater company looks at a choral ensemble and does not see much resemblance; a string quartet looks at a small dance ensemble or a struggling art gallery and does not see itself mirrored there.

Likewise, the divisions within arts education never seem to resolve. We waste energy on the same familial tiffs we have had for decades: disciplinary instruction vs. arts integration, arts education for art’s sake vs. arts education to produce other benefits, certified arts instructors vs. teaching artists, in-school learning vs. all the learning that happens outside of school—and what about the granny who plays the ukulele? These old hostilities, prejudices, and cross-purposes persist within a culture of scarcity, eroding the expansive, inclusive impulses that got us into arts-learning in the first place.

As a consultant, I have had many opportunities to try to build local arts partnerships and consortia; the usual strategy is to identify common goals and thereby foster a joint commitment to actions that will lift all the organizational boats together. Sometimes progress is made, and there are inspiring examples of success in a few cities; more often, the separateness of the participants is palpable and pervasive, caution and distrust remain entrenched, and the proposed partners have no shared language. This last point takes a while to surface, and is hard to admit—each doesn’t really know what the other is talking about, or the separate fields don’t agree on some fundamental point. You don’t believe me? Try discussing with an artist from another discipline what you think creativity really is.

The current painful economic constriction may be the catalyst we need to change our habits of thinking and jump us out of our ruts. As Rahm Emanuel said when he was appointed White House Chief of Staff: “A crisis is too good an opportunity to waste.”

More

Eric Booth
Springboard for the Arts

Most great stories revolve around decisions: the snap brilliance of Captain Sullenberger choosing to land his plane in the Hudson, or Dorothea’s prolonged, agonizing choice of whether to forsake her husband for true love in “Middlemarch,” or your parents’ oft-told account of the day they decided to marry. There is something powerfully human in the act of deliberately choosing a path; other animals have drives, emotions, problem-solving skills, but none rival our capacity for self-consciously weighing all the options, imagining potential outcomes and arriving at a choice. As George W. Bush might have put it, we are a species of deciders.

Jonah Lehrer’s engaging new book, “How We Decide,” puts our decision-making skills under the microscope. At 27, Lehrer is something of a popular science prodigy, having already published, in 2007, “Proust Was a Neuroscientist,” which argued that great artists anticipated the insights of modern brain science. “How We Decide” tilts more decisively in the thinking-person’s self-help direction, promising not only to explain how we decide, but also to help us do it better.

This is not exactly uncharted terrain. Early on, Lehrer introduces his main theme: “Sometimes we need to reason through our options and carefully analyze the possibilities. And sometimes we need to listen to our emotions.” Most readers at this point, I suspect, will naturally think of Malcolm Gladwell’s mega-best-seller “Blink,” which explored a similar boundary between reason and intuition. But a key difference between the two books quickly emerges: Gladwell’s book took an external vantage point on its subject, drawing largely on observations from psychology and sociology, while Lehrer’s is an inside job, zooming in on the inner workings of the brain. We learn about the nucleus accumbens, spindle cells and the prefrontal cortex. Many of the experiments he recounts involve fMRI scans of brains in the process of making decisions (which, for the record, is a little like making a decision with your head stuck in a spinning clothes dryer).

Explaining decision-making on the scale of neurons makes for a challenging task, but Lehrer handles it with confidence and grace. As an introduction to the cognitive struggle between the brain’s “executive” rational centers and its more intuitive regions, “How We Decide” succeeds with great panache, though readers of other popular books on this subject (Antonio Damasio’s “Descartes’ Error” and Daniel Goleman’s “Emotional Intelligence,” for example) will be familiar with a number of the classic experiments Lehrer describes.

In part, the neuroscience medicine goes down so smoothly because Lehrer introduces each concept with an arresting anecdote from a diverse array of fields: Tom Brady making a memorable pass in the 2002 Super Bowl; a Stanford particle physicist nearly winning the World Series of Poker; Al Haynes, the Sully of 1989, making a remarkable crash landing of a jetliner whose hydraulic system had failed entirely. The anecdotes are, without exception, well chosen and artfully told, but there is something in the structure of this kind of nonfiction writing that is starting to feel a little formulaic: startling mini-narrative, followed by an explanation of What the Science Can Teach Us, capped by a return to the original narrative with some crucial mystery unlocked. (I say this as someone who has used the device in my own books.) It may well be that this is simply the most effective way to convey these kinds of ideas to a lay audience. But part of me hopes that a writer as gifted as Lehrer will help push us into some new formal technique in future efforts.

A book that promises to improve our decision-making, however, should be judged on more than its narrative devices. The central question with one like “How We Decide” is, Do you get something out of it? It’s fascinating to learn about the reward circuitry of the brain, but on some basic level, we know that we seek out rewards and feel depressed when we don’t get them. Learning that this process is modulated by the neurochemical dopamine doesn’t, on the face of it, help us in our pursuit of those rewards. But Lehrer’s insights, fortunately, go well beyond the name-that-neurotransmitter trivia. He’s insightful and engaging on “negativity bias” and “loss aversion”: the propensity of the human brain to register bad news more strongly than good. (Negativity bias, for instance, explains why in the average marital relationship it takes five compliments to make up for a single cutting remark.) He has a wonderful section on creativity and working memory, which ends with the lovely epigram: “From the perspective of the brain, new ideas are merely several old thoughts that occur at the exact same time.”

For this reader, though, the most provocative sections of “How We Decide” involve sociopolitical issues more than personal ones. A recurring theme is how certain innate bugs in our decision-making apparatus led to our current financial crisis. We may be heavily “loss averse,” but only in the short run: a long list of experiments has shown that completely distinct parts of the brain are activated if the potential loss lies in the mid- or long-term future, making us more susceptible to the siren song of the LCD TV or McMansion. So many of the financial schemes that led us astray over the past decade exploit precisely these defects in our decision-making tools. “Paying with plastic fundamentally changes the way we spend money, altering the calculus of our financial decisions,” Lehrer writes. “When you buy something with cash, the purchase involves an actual loss — your wallet is literally lighter. Credit cards, however, make the transaction abstract.” Proust may have been a neuroscientist, but so were the subprime mortgage lenders. These are scientific insights that should be instructive to us as individuals, of course, but they also have great import to us as a society, as we think about the new forms of regulation that are going to have to be invented in the coming years to prevent another crisis.

“How We Decide” has one odd omission. For a book that plumbs the mysteries of the emotional brain, it has almost nothing to say about the decisions that most of us would conventionally describe as “emotional.” We hear about aviation heroism and poker strategies, and we hear numerous accounts of buying consumer goods. But there’s barely a mention of a whole class of choices that are suffused with emotion: whether to break up with a longstanding partner, or to scold a disobedient child, or to let an old friend know that you feel betrayed by something he’s said. For most of us, I suspect, these are the decisions that matter the most in our lives, and yet “How We Decide” is strangely silent about them. Perhaps Jonah Lehrer will use his considerable talents to tackle these most human of decisions in another volume. Until then, we’ve still got “Middlemarch.”

Steven Johnson
New York Times

President Obama’s doodle, sketched as part of a “National Doodle Day” to benefit the charity Neurofibromatosis, contains likenesses of Senate Majority Leader Harry Reid (D-NV) and Democratic colleagues Edward Kennedy of Massachusetts, Dianne Feinstein of California and Charles Schumer of New York. Courtesy Wayne Berzon/Neurofibromatosis Inc.

Four years ago at Davos, the famous World Economic Forum, then-Prime Minister Tony Blair appeared on a panel with Bill Gates, Bill Clinton and the rock star Bono. After the panel, a journalist wandering the stage came across some papers scattered near Blair’s seat. The papers were covered in doodles: circles and triangles, boxes and arrows.

“Your standard meeting doodles,” says David Greenberg, professor of journalism at Rutgers University.

So this journalist brought his prize to a graphologist who, after careful study, drew some pretty disturbing conclusions. According to experts quoted in the Independent and The Times, the prime minister was clearly “struggling to maintain control in a confusing world” and “is not rooted.” Worse, Blair was apparently “not a natural leader, but more of a spiritual person, like a vicar.”

Two other major British newspapers, which had also somehow gotten access to the doodles, came to similar conclusions.

A couple of days later, No. 10 Downing Street finally weighed in. It had done a full and thorough investigation and had an important announcement to make:

The doodles were not made by Blair; they were made by Bill Gates. Gates had left them in the next seat over.

Oodles Of Doodles

Gates is a doodler, and he’s not alone. Lyndon Johnson doodled. Ralph Waldo Emerson doodled. Ronald Reagan drew pictures of cowboys, horses and hearts crossed with arrows. Most of us doodle at one point or another. But why?

To understand where the compulsion to doodle comes from, the first thing you need to do is look more closely at what happens to the brain when it becomes bored. According to Jackie Andrade, a professor of psychology at the University of Plymouth, though many people assume that the brain is inactive when they’re bored, the reverse is actually true.

“If you look at people’s brain function when they’re bored, we find that they are using a lot of energy — their brains are very active,” Andrade says.

The reason, she explains, is that the brain is designed to constantly process information. But when the brain finds an environment barren of stimulating information, it’s a problem.

“You wouldn’t want the brain to just switch off, because a bear might walk up behind you and attack you; you need to be on the lookout for something happening,” Andrade says.

So when the brain lacks sufficient stimulation, it essentially goes on the prowl and scavenges for something to think about. Typically what happens in this situation is that the brain ends up manufacturing its own material.

In other words, the brain turns to daydreams, fantasies of Oscar acceptance speeches and million-dollar lottery wins. But those daydreams take up an enormous amount of energy.

Ergo The Doodle

This brings us back to doodling. The function of doodling, according to Andrade, who recently published a study on doodling in Applied Cognitive Psychology, is to provide just enough cognitive stimulation during an otherwise boring task to prevent the mind from taking the more radical step of totally opting out of the situation and running off into a fantasy world.

Andrade tested her theory by playing a lengthy and boring tape of a telephone message to a collection of people, only half of whom had been given a doodling task. After the tape ended she quizzed them on what they had retained and found that the doodlers remembered much more than the nondoodlers.

“They remembered about 29 percent more information from the tape than the people who were just listening to the tape,” Andrade says.

In other words, doodling doesn’t detract from concentration; it can help by diminishing the need to resort to daydreams.

It’s a very good strategy for the next time you find yourself stuck on a slow-moving panel with an aging rock star and a verbose former president.

Alix Spiegel
NPR

Interesting notes from social media researcher danah boyd (she seems to prefer lowercase letters), drawn from her presentation to Microsoft researchers last month. While many businesses in the arts and elsewhere are seeking tactics and strategies for using social media, Ms. Boyd is exploring the intersection of these technologies with the core dynamics of human interaction.

In these comments, she traces the past, present, and future of social media systems (like Facebook, MySpace, and the like), and she highlights three dynamics that are emerging as a result of their use. Says she:

1. Invisible Audiences. We are used to being able to assess the people around us when we’re speaking. We adjust what we’re saying to account for the audience. Social media introduces all sorts of invisible audiences. There are lurkers who are present at the moment but whom we cannot see, but there are also visitors who access our content at a later date or in a different environment than where we first produced them. As a result, we are having to present ourselves and communicate without fully understanding the potential or actual audience. The potential invisible audiences can be stifling. Of course, there’s plenty of room to put your head in the sand and pretend like those people don’t really exist.
2. Collapsed Contexts. Connected to this is the collapsing of contexts. In choosing what to say when, we account for both the audience and the context more generally. Some behaviors are appropriate in one context but not another, in front of one audience but not others. Social media brings all of these contexts crashing into one another and it’s often difficult to figure out what’s appropriate, let alone what can be understood.
3. Blurring of Public and Private. Finally, there’s the blurring of public and private. These distinctions are normally structured around audience and context with certain places or conversations being “public” or “private.” These distinctions are much harder to manage when you have to contend with the shifts in how the environment is organized.

Useful stuff, whether you’ve dived into Facebook as an individual or an organization, or you’re noticing that your audience is already in the pool.

Andrew Taylor
The Artful Manager

It’s the rare manager who doesn’t partake in quarterly or annual goal-setting exercises. And woe to those who don’t make their goals SMART (Specific, Measurable, Attainable, Realistic, Timely).

But do these goals really work? Researchers from four top business schools have collaborated to show that in many cases goals do more harm than good. Worse, they can cause real damage to organizations and individuals using them.

“We argue that the beneficial effects of goal setting have been overstated and that systematic harm caused by goal setting has been largely ignored,” the researchers conclude. Bad “side effects” produced by goal-setting programs include a rise in unethical behavior, over-focus on one area while neglecting other parts of the business, distorted risk preferences, corrosion of organizational culture, and reduced intrinsic motivation.

One example: the explosive Ford Pinto. Presented with a goal to build a car “under 2,000 pounds and under $2,000” by 1970, employees overlooked safety testing and designed a car where the gas tank was vulnerable to explosion from rear-end collisions. Fifty-three people died as a result.

Used wisely, goals can inspire employees and improve performance, the authors agree. But goal setting must be prescribed in doses, not as a standard remedy to increase productivity. They even offer a warning label and list 10 questions managers should ask themselves before starting goal setting.

Sean Silverthorne
Harvard Business School Working Knowledge