Photographer Cedric Pollet travels the world, barking up trees for a living.
Our memory is like an ear of corn. At least, that’s what Valerie Reyna was taught in graduate school.
Its Forrest Gumpish feel notwithstanding, the metaphor seemed scientifically sound. After all, researchers had already concluded there are two distinct types of memory: verbatim, which allows us to recall what specifically happened at any given moment, and gist, which enables us to put the event in context and give it meaning.
“We were taught you extracted the gist from the verbatim memory,” recalled Reyna, an experimental psychologist and former senior research adviser to the U.S. Department of Education. “It was like husking an ear of corn. You threw away the husk, which was the verbatim, and you kept the gist, which was the kernel of meaning.”
There it was: Neat. Simple. Agrarian.
And also, as Reyna discovered over decades of subsequent research, wrong.
After conducting numerous studies with her partner, psychologist Charles Brainerd, Reyna concluded that verbatim and gist memory are separate, parallel systems. So separate, in fact, that “there is some evidence” they occupy different sections of the brain.
Reyna and Brainerd’s hypothesis, which they call “fuzzy trace theory,” explains how we can “remember” things that never really happened.
When an event occurs, verbatim memory records an accurate representation. But even as it is doing so, gist memory begins processing the information and determining how it fits into our existing storehouse of knowledge. Verbatim memories generally die away within a day or two, leaving only the gist memory, which records the event as we interpreted it.
Under certain circumstances, this can produce a phenomenon Reyna and her colleagues refer to as “phantom recollection.” She calls this “a powerful form of false alarm” in which gist memory — designed to look for patterns and fill in perceived gaps — creates a vivid but illusory image in our mind.
Mental snapshots soon fade; what lingers are our impressions of an occurrence, which are shaped by the meanings we attach to it…
“We’re looking at a number of things, including the effect of emotion on memory — how emotion interacts with your interpretation of events,” Reyna said. “Does arousal interfere with your encoding of memory? Does it ‘stamp it in,’ as some of the neuroscience literature suggests? The effect might be more complex than that.”
One question that can’t be answered in the lab is why, in evolutionary terms, we would develop two separate memory systems. Reyna, who has given this considerable thought, noted that if all we had was our rapidly fading verbatim memory, “it would be very hard to function — especially in an oral culture. Cognition appears to be engineered around gist memory, which endures and is stable.”
Consider the case of one of our prehistoric ancestors who is attacked by a saber-toothed tiger but manages to escape before being eaten. Verbatim memory would tell him precisely where the altercation took place, exactly what the tiger looked like and what tree he climbed to get beyond the animal’s reach. Gist memory would tell him: “Tigers are dangerous. If I go walking in the forest after dark, I’d better bring my spear.”
The first would be interesting; the second, essential. As Reyna wryly noted, “You don’t have to count the stripes to know the tiger is bad.”
Change blindness [is] the frequent inability of our visual system to detect alterations to something staring us straight in the face. The changes needn’t be as modest as a switching of paint chips. At the same meeting…the audience failed to notice entire stories disappearing from buildings, or the fact that one poor chicken in a field of dancing cartoon hens had suddenly exploded. In an interview, Dr. Wolfe [of Harvard Medical School] also recalled a series of experiments in which pedestrians giving directions to a Cornell researcher posing as a lost tourist didn’t notice when, midway through the exchange, the sham tourist was replaced by another person altogether.
Beyond its entertainment value, symposium participants made clear, change blindness is a salient piece in the larger puzzle of visual attentiveness. What is the difference between seeing a scene casually and automatically, as in, you’re at the window and you glance outside at the same old streetscape and nothing registers, versus the focused seeing you’d do if you glanced outside and noticed a sign in the window of your favorite restaurant, and oh no, it’s going out of business because, let’s face it, you always have that Typhoid Mary effect on things. In both cases the same sensory information, the same photonic stream from the external world, is falling on the retinal tissue of your eyes, but the information is processed very differently from one eyeful to the next. What is that difference? At what stage in the complex circuitry of sight do attentiveness and awareness arise, and what happens to other objects in the visual field once a particular object has been designated worthy of a further despairing stare?
Visual attentiveness is born of limited resources. “The basic problem is that far more information lands on your eyes than you can possibly analyze and still end up with a reasonably sized brain,” Dr. Wolfe said. Hence, the brain has evolved mechanisms for combating data overload, allowing large rivers of data to pass along optical and cortical corridors almost entirely unassimilated, and peeling off selected data for a close, careful view. In deciding what to focus on, the brain essentially shines a spotlight from place to place, a rapid, sweeping search that takes in maybe 30 or 40 objects per second, the survey accompanied by a multitude of body movements of which we are barely aware: the darting of the eyes, the constant tiny twists of the torso and neck. We scan and sweep and perfunctorily police, until something sticks out and brings our bouncing cones to a halt.
The mechanisms that succeed in seizing our sightline fall into two basic classes: bottom up and top down. Bottom-up attentiveness originates with the stimulus, with something in our visual field that is the optical equivalent of a shout: a wildly waving hand, a bright red object against a green field. Bottom-up stimuli seem to head straight for the brainstem and are almost impossible to ignore, said Nancy Kanwisher, a vision researcher at M.I.T., and thus they are popular in Internet ads.
Top-down attentiveness, by comparison, is a volitional act, the decision by the viewer that an item, even in the absence of flapping parts or strobe lights, is nonetheless a sight to behold. When you are looking for a specific object — say, your black suitcase on a moving baggage carousel occupied largely by black suitcases — you apply a top-down approach, the bouncing searchlights configured to specific parameters, like a smallish, scuffed black suitcase with one broken wheel. Volitional attentiveness is much trickier to study than is a simple response to a stimulus, yet scientists have made progress through improved brain-scanning technology and the ability to measure the firing patterns of specific neurons or the synchronized firing of clusters of brain cells.
Recent studies with both macaques and humans indicate that attentiveness crackles through the brain along vast, multifocal, transcortical loops, leaping to life in regions at the back of the brain, in the primary visual cortex that engages with the world, proceeding forward into frontal lobes where higher cognitive analysis occurs, and then doubling back to the primary visual centers. En route, the initial signal is amplified, italicized and annotated, and so persuasively that the boosted signal seems to emanate from the object itself. The enhancer effect explains why, if you’ve ever looked at a crowd photo and had somebody point out the face of, say, a young Franklin Roosevelt or George Clooney in the throng, the celebrity’s image will leap out at you thereafter as though lighted from behind.
Whether lured into attentiveness by a bottom-up or top-down mechanism, scientists said, the results of change blindness studies and other experiments strongly suggest that the visual system can focus on only one or very few objects at a time, and that anything lying outside a given moment’s cone of interest gets short shrift. The brain, it seems, is a master at filling in gaps and making do, at compiling a cohesive portrait of reality based on a flickering view.
“Our spotlight of attention is grabbing objects at such a fast rate that introspectively it feels like you’re recognizing many things at once,” Dr. Wolfe said. “But the reality is that you are only accurately representing the state of one or a few objects at any given moment.” As for the rest of our visual experience, he said, it has been aptly called “a grand illusion.” Sit back, relax and enjoy the movie called You.
New York Times
Scientists have developed a computerised mind-reading technique which lets them accurately predict the images that people are looking at by using scanners to study brain activity.
To achieve the breakthrough, the American scientists used MRI scanning equipment, normally employed in hospital diagnosis, to observe patterns of brain activity while a subject examined a range of black and white photographs. A computer was then able to predict correctly, in nine out of 10 cases, which image the subject was focused on. Guesswork would have been accurate only eight times in every 1,000 attempts…
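To see how far the reported performance sits from chance, the arithmetic can be sketched in a few lines of Python. The article does not say how many candidate images were in play; the figure of roughly 120 below is an assumption chosen only because it is consistent with the stated chance rate of about 8 in 1,000 (1/120 ≈ 0.0083).

```python
from math import comb

# Assumption (not stated in the article): a chance rate of ~8 in 1,000
# implies roughly 120 candidate images, since 1/120 ≈ 0.0083.
n_images = 120
p_chance = 1 / n_images
print(f"chance of one correct guess: {p_chance:.4f}")

# Probability of matching the reported result (at least 9 of 10 correct)
# by pure guessing: a binomial tail sum, and it is vanishingly small.
trials, successes = 10, 9
p_nine_or_more = sum(
    comb(trials, k) * p_chance**k * (1 - p_chance) ** (trials - k)
    for k in range(successes, trials + 1)
)
print(f"probability of >=9/10 by guessing: {p_nine_or_more:.3g}")
```

Under this assumption, guessing right nine times out of ten has a probability on the order of 10⁻¹⁸, which is why the result reads as decoding rather than luck.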
Gallant said it might be possible in future to apply the technology to visual memories or dreams. “Probably the visual hardware is engaged and stuff from memory is sort of downloaded into your visual hardware and then replayed,” he said. “To the extent that that is true, we should be able to reconstruct imagery in dreams.”
On a cold January afternoon in this tiny village near the German border, the garden designer Piet Oudolf put on a heavy coat and led the way out of the 1850s farmhouse he shares with his wife, Anja, and into his garden. After a few steps he stopped and pointed with pride at a stalk of dead fennel standing in a bed of moribund, wheat-colored joe-pye weed. “Normally, people who garden would have cut this back by now,” he said. “The skeletons of the plants are for me as important as the flowers.”
For Mr. Oudolf, in fact, the real test of a well-composed garden is not how nicely it blooms but how beautifully it decomposes. “It’s not about life or death,” he said, admiring the dark, twisting lines of the fennel. “It’s about looking good.”
Over three decades, Mr. Oudolf’s sometimes unconventional ideas about what looks good have helped make him a star in Europe —where his work has inspired an “ecology meets design” gardening movement called New Wave Planting by its followers — and have also begun to win him fans and jobs in the United States. He has done the planting design for important new gardens in Millennium Park in Chicago and the Battery in New York, and for the park that will cover the elevated High Line rail bed in Lower Manhattan when it opens in September. These landscapes, like all his projects, embody and advertise his fundamental aesthetic doctrine: that a plant’s structure and form are more important than its color.
“He’s gotten away from the soft pornography of the flower,” said Charles Waldheim, the director of the landscape architecture program at the University of Toronto. “He’s interested in the life cycle, how plant material ages over the course of the year,” and how it relates to the plants around it. Like a good marriage, his compositions must work well together as their members age.
“Most people think in a formal way: if you put A and B with C, it will look like this — but only at a certain moment in time,” said James Corner, chairman of the department of landscape architecture at the University of Pennsylvania and director of Field Operations, the New York landscape design firm working on the High Line with Mr. Oudolf and the architecture office of Diller Scofidio & Renfro. Mr. Corner said that one reason he asked Mr. Oudolf to do the project’s planting design is that the way he selects and composes plants “is thought through not only in terms of summer, but also in terms of winter — all 12 months are interesting.”
New York Times