Baseball practice in Montgomery County, Maryland. Photograph: Tomas van Houtryve
When photographer Tomas van Houtryve shows people his picture of a yoga class mid-pose in a San Francisco public park, half see people practising yoga, while the other half see people praying. It is this reaction to what drones capture that worries him.
“Imagine if all we knew about the way people in Pakistan lead their lives were derived from images of the tops of their heads, taken from 15,000ft (4,500 metres) in the air. It’s bound to be full of uncertainty. Is this the best way to fight a war?”
The fact that there were few published photographs of US drone activity had been bothering Van Houtryve. Then, last summer, he was sent on assignment to Peru to photograph a mine. It was while trying to secure aerial shots that an engineer introduced him to the use of drones in photography; he soon earned enough to buy his own.
“When I first started looking, they were expensive and difficult to get hold of, but they started popping up on Amazon for a more reasonable price,” he says. With the help of online forums and through “internet shopping for bits and bobs” from France, Hong Kong and the US, Van Houtryve modified his drone so that it could carry a high-definition camera and transmit video back to his monitor on the ground. In total, the device cost him around $2,500 (£1,500).
3-D printers are typically used to make high-resolution models or functional prototypes, but artist Shane Hope manipulates them to channel his inner Jackson Pollock. The Brooklyn-based artist creates “paintings” that are densely packed with a rainbow of 3-D printed barnacles. The results are massive, dazzling assemblages, beautiful in the way that spectacular computer glitches can be, and are matched in manic energy only by Hope’s descriptions of them. “Seeing 3-D printing as a sort of gateway drug en route toward molecular manufacturing, I thereafter decided I’d visually/literally relate the operative ideologies, promises, and hype of 3-D printing to the R&D and forecasts regarding nanofacture.” Heady stuff, and while this jargon-filled description is a tad grandiose, the paintings push the boundaries of low-cost 3-D printers in new and interesting ways.
Think art. What comes to mind? Maybe Picasso, Rodin, Dali.
Now think technology – and you’ll probably imagine a smartphone or a computer.
Throughout history, technology has provided artists with new tools for expression.
Today, these two seemingly distinct disciplines are more interlinked than ever, with technology a fundamental force in the development and evolution of art.
All over the world, people are engineering our future. The internet, digital fabrication, nanotech, biotech, self-modification, augmented reality, virtual reality, “the singularity” – you name it, all of this is altering our lives and our view of the world and ourselves.
If you are an art lover, your life is weighed down by coffee table books. They stack up and sprawl out way beyond the intended realm. In my case, the coffee table itself rests on a mound of art books, which also mass in mountains all around. Why? Because picture books have traditionally been the only way to keep good reproductions of art with you. In recent years they have got even bigger, as publishers supersize their Michelangelo tomes.
So, the rise of online art resources is a liberation. I love the way great paintings are becoming increasingly accessible on the computer screen. One innovation is the new site artfinder, which offers you the chance to build your own gallery of favourites from a vast and presumably growing store of digital reproductions of great art. As always with these ventures, it is important to realise it is not and cannot be complete. I found at least one surprising gap: although it has 30 works by Watteau, the site does not include his masterpiece Gilles.
Shortened attention span. Less interest in reflection and introspection. Inability to engage in in-depth thought. Fragmented, distracted thinking.
The ways the Internet supposedly affects thought are as apocalyptic as they are speculative, since all the above are supported by anecdote, not empirical data. So it is refreshing to hear how 109 philosophers, neurobiologists, and other scholars answered, “How is the Internet changing the way you think?” That is the “annual question” at the online salon edge.org, where every year science impresario, author, and literary agent John Brockman poses a puzzler for his flock of scientists and other thinkers.
Although a number of contributors drivel on about, say, how much time they waste on e-mail, the most striking thing about the 50-plus answers is that scholars who study the mind and the brain, and who therefore seem best equipped to figure out how the Internet alters thought, shoot down the very idea. “The Internet hasn’t changed the way we think,” argues neuroscientist Joshua Greene of Harvard. It “has provided us with unprecedented access to information, but it hasn’t changed what [our brains] do with it.” Cognitive psychologist Steven Pinker of Harvard is also skeptical. “Electronic media aren’t going to revamp the brain’s mechanisms of information processing,” he writes. “Texters, surfers, and twitterers” have not trained their brains “to process multiple streams of novel information in parallel,” as is commonly asserted but refuted by research, and claims to the contrary “are propelled by … the pressure on pundits to announce that this or that ‘changes everything.’ ”
More than 400 files from the Tate galleries are now on iTunes U – a section of the online store which features educational content.
Projects include a series of films that use social networking site Twitter to bring the audience’s questions directly to artists like David Hockney.
There are also recent interviews with contemporary artists including Jeff Koons and Louise Bourgeois.
Clips of Turner Prize-winning artist Martin Creed and his band performing at the Tate Modern are featured alongside debates about his work.
Audio recordings of leading academics, teaching resources and multimedia guides for the latest Tate exhibitions will also be made available.
The Tate has four galleries – two in London, one in Liverpool and one in St Ives, in Cornwall.
Everyone has been talking about an article in The Atlantic magazine called “Is Google Making Us Stupid?” Some subset of that group has actually read the 4,175-word article, by Nicholas Carr.
To save you some time, I was going to give you a 100-word abridged version. But there are just too many distractions to read that much. So here is the 140-character Twitter version (Twitter is a hyperspeed form of blogging in which you write about your life in bursts of 140 characters or fewer, including spaces and punctuation marks):
Google makes deep reading impossible. Media changes. Our brains’ wiring changes too. Computers think for us, flattening our intelligence.
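To make the constraint concrete, here is a minimal sketch in Python (my illustration, not anything from the article or from Twitter's own tooling) of how the 140-character rule works: every character counts, spaces and punctuation included. The function name is hypothetical.

```python
# Illustrative sketch only: check whether a summary fits Twitter's
# 140-character limit, counting spaces and punctuation, as described above.

TWEET_LIMIT = 140

def fits_in_a_tweet(text: str, limit: int = TWEET_LIMIT) -> bool:
    """Return True if the text fits within the character limit.

    len() counts every character, including spaces and punctuation,
    which matches the rule described in the article.
    """
    return len(text) <= limit

summary = ("Google makes deep reading impossible. Media changes. "
           "Our brains' wiring changes too. Computers think for us, "
           "flattening our intelligence.")

print(len(summary), fits_in_a_tweet(summary))  # 137 True
```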
If you managed to wade through that, maybe you are thinking that Twitter, not Google, is the enemy of human intellectual progress.
With Twitter, people subscribe to your “tweets.” Those who can make life’s mundane details interesting garner a large audience. Several services have been created to compete with Twitter. Others have been started to help people manage the prodigious flow of information from Twitterers.
There is even a version, Yammer, for use inside companies. You follow the word bursts of particular employees. (“In the weekly staff meeting. Good bagels. Why is everyone wearing khakis? All staff must file their T.P.S. reports on time, O.K.?”) As if there weren’t already enough to distract us in the workplace between meetings, phone calls, instant messages, e-mail messages and those Google searches.
If people question the benefit of Google, which has largely liberated us from the time-wasting activities associated with finding information, there is outright hostility to a tool that condenses our lives into haiku. The co-founder of Twitter, Jack Dorsey, was asked by M.I.T.’s Technology Review magazine (in a tweet, of course) why, when people who aren’t familiar with Twitter are told about it, they are “uncomprehending or angry.” His response was brief and unsatisfying: “People have to discover value for themselves. Especially w/ something as simple & subtle as Twitter. It’s what you make of it.”
It is hard to think of a technology that wasn’t feared when it was introduced. In his Atlantic article, Mr. Carr says that Socrates feared the impact that writing would have on man’s ability to think. The advent of the printing press summoned similar fears. It wouldn’t be the last time.
When Hewlett-Packard invented the HP-35, the first hand-held scientific calculator, in 1972, the device was banned from some engineering classrooms. Professors feared that engineers would use it as a crutch, that they would no longer understand the relationships that either penciled calculations or a slide rule somehow provided for proficient scientific thought.
But the HP-35 hardly stultified engineering skills. Instead, in the last 36 years those engineers have brought us iPods, cellphones, high-definition TV and, yes, Google and Twitter. It freed engineers from wasting time on mundane tasks so they could spend more time creating.
Many technological advances have that effect. Take tax software, for instance. The tedious job of filing a tax return no longer requires several evenings, but just a few hours. It gives us time for more productive activities.
But for all the new technologies that increase our productivity, there are others that demand more of our time. That is one of the dialectics of our era. With its maps and Internet access, the iPhone saves us time; with its downloadable games, we also carry a game machine in our pocket. The proportion of time-wasters to time-savers may only grow. In a knowledge-based society in which knowledge is free, attention becomes the valued commodity. Companies compete for eyeballs, that great metric born in the dot-com boom, and vie to create media that are sticky, another great term from this era. We are not paid for our attention span, but rewarded for it with yet more distractions and demands on our time.
The pessimistic assumption that new technologies will somehow make our lives worse may be a function of occupation or training. Paul Saffo, the futurist, says he could divide the technology world into two kinds of people: engineers and natural scientists. He says the world outlook of the engineer is by nature optimistic. Every problem can be solved if you have the right tools and enough time and you pose the correct questions. Other people, who can be just as scientific, see the natural order of the world in terms of entropy, decline and death.
Those people aren’t necessarily wrong. But the engineer’s point of view puts trust in human improvement. Certainly there have been moments when that thinking has gone horribly awry — atonal music or molecular gastronomy. But over the course of human history, writing, printing, computing and Googling have only made it easier to think and communicate.
New York Times