Here’s an interesting finding: It turns out that unmarried couples who live together are more likely to share the housework equally than married couples. That is, men in unmarried couples do more housework than married men, and women in unmarried couples do less housework than married women. Why? Possibly because, as the authors — Shannon Davis of George Mason University, with Theodore Greenstein and Jennifer Gerteisen Marks of North Carolina State University — suggest, marriage is such a culturally powerful institution that men and women shift their views of themselves when they say “I do”.
As further evidence, the researchers point out that this shift occurs even in couples with an “egalitarian” point of view — i.e. where both partners believe that men and women ought to share the work equally. When couples like this marry, the men still wind up doing less of the work. As the press release notes:
“Marriage as an institution seems to have a traditionalizing effect on couples — even couples who see men and women as equal,” says Davis.
You can read their full study online here as a PDF if you want. There’s a lot of fascinating data in it, and it seems reasonably solid; the researchers polled 17,636 respondents in 28 nations. (Mind you, there are the usual problems with this sort of research — i.e. partners might be misreporting the amount of housework they do, adjusting it either up or down.)
Here’s one interesting finding buried towards the end: It turns out that in households where the women make a lot more than the men, the men report doing more housework — but the women do not report their housework going down. Someone’s perceptions are off: Either the men or the women are overestimating the housework they do in this situation. What’s going on? The authors suggest this curious result might arise because men and women have divergent attitudes towards the meaning of work and money.
Men, they hypothesize, “are more likely to see money as a way to ‘buy out’ of housework.” So in situations where the woman makes a lot more money than they do, they see their partner as having “bought out” of housework — and conclude that they themselves ought to be picking up the slack. In this situation, they’re more likely to feel okay about reporting that they do more housework, whether or not they’re actually doing it.
But the women, the scientists suggest, have a different view. They’re “more likely to view money as power within the relationship that is not as directly tied to hours of housework”: i.e. making more money, for them, means they’re storing up credit to be spent in other parts of their relationship negotiations. They don’t see themselves as “buying their way out of work” — so they report doing the same amount of housework (again, whether or not the amount they do has actually gone down).
At least, I think I’m reading the study correctly. Someone check it out and let me know if my interpretation of this is off. It’s really intriguing, either way.
Today, Wired News published my latest video-game column — in which I argue that scary games are now doing a better job than scary movies in carrying on the traditions of horror. The piece is online for free here, and a copy is below!
Gore Is Less: Videogames Make Better Horror Than Hollywood
by Clive Thompson
I’d only been playing BioShock for 15 minutes, and already I was trembling like a little girl.
It’s hard to disentangle what precisely was scaring the crap out of me. Maybe it was hearing the rumbling moans of a nearby Big Daddy, and realizing it was hunting for me. Maybe it was the way those filthy, genetically modified humans would pop out of nowhere, dressed, improbably, in Victorian clothes and creepy Eyes Wide Shut clown masks. Or maybe it was their weirdly garbled dialogue — how they’d shriek, “Get away from me!” while slashing at me with lead pipes.
The fact is, I like to be scared out of my wits. I’m one of those wimps who is easily spooked yet generally enjoys the sensation. So ever since I was a kid, I’ve loved good horror movies — I’d turn out the lights and freak myself out with classics like Halloween, Friday the 13th or The Exorcist.
Yet here’s the thing: For several years now, I’ve found that my favorite horror experiences aren’t coming from movies any more. They’re coming from games.
This is a lovely bit of design: An ad for Guinness that consists of warped text on a beer coaster, which only becomes legible when the drink is poured. (Sorry for the overly massive image, but you can only see the effect when the picture is big enough.)
(Thanks to Andrew Hearst for this one!)
When someone gives you an insincere apology — i.e. it’s pretty clear they’re not actually sorry, but they’re being forced by someone else to say so — how do you react?
Psychologists have long observed that, counterintuitively, people accept forced apologies as graciously as they accept genuine ones. Grade-school teachers frequently remark on this: They watch as a colleague drags a surly 5th grader over to deliver a clearly fake and insincere apology to another 5th grader — upon which the aggrieved party happily accepts the apology and skips away.
What’s going on? Those of us who witness such incidents are incredulous: We know that the apologies are fake. So why do the aggrieved parties accept them so readily?
Jane Risen and Thomas Gilovich, two psychologists at Cornell, recently staged five different experiments in which people insulted a study subject and then apologized — either voluntarily and sincerely, or only when pressed to, i.e. insincerely. They found that, as they’d suspected, the insulted parties were generally equally content with sincere and coerced apologies alike. They didn’t judge their insulter harshly. But other participants in the experiments who witnessed the incidents were unconvinced by the insincere apologies — and they did judge the insulter harshly.
The reason for this disparity, the scientists argue, is that people who are receiving the apology and those who are watching the exchange are in different social roles:
The target [of an insult] may be motivated to come across as a forgiving person and to restore the smoothness of the social interaction so that the audience does not look down on him or her … The situation for observers is different. If an observer excuses someone who offers an insincere apology, the observer may be seen as insufficiently empathetic to the victim. It may thus be in the interest of observers to respond differently to sincere and insincere apologies and thereby signal that they care about others.
In one sense, this is perfectly obvious stuff. But it intrigues me because the status of apologies is pretty charged in a number of realms right now. One is politics, where political figures are increasingly regarded as “weak” if they apologize for anything, or even admit they’ve ever done anything wrong. Another is health care, where studies have shown that doctors who apologize for bad outcomes are considerably less likely to get sued — but of course, apologizing for even the tiniest thing horrifies their attorneys, who worry that it’d be used against them in a malpractice suit.
The paper is online here in PDF form if you want to read it yourself.
(Thanks to Top 10 Sources for this one!)
This is both wonderfully practical and totally hilarious: A couple of Oregon scientists developed a technique that lets you take a teaspoon of water from a city’s sewer plant and detect which drugs the population is currently using. It’s based on a simple point: Every drug you take eventually comes out in your urine, and a community’s urine all goes in one direction — down the toilet.
They’ve only tested a few different cities, but the regional differences are intriguing:
One of the early results of the new study showed big differences in methamphetamine use city to city. One urban area with a gambling industry had meth levels more than five times higher than other cities. Yet methamphetamine levels were virtually non-existent in some smaller Midwestern locales, said Jennifer Field, the lead researcher and a professor of environmental toxicology at Oregon State.
The ingredient Americans consume and excrete the most was caffeine, Field said. [snip]
She said that one fairly affluent community scored low for illicit drugs except for cocaine. Cocaine and ecstasy tended to peak on weekends and drop on weekdays, she said, while methamphetamine and prescription drugs were steady throughout the week.
This is, of course, largely being viewed as a technique for urban-health analysis and crime prevention: By knowing which drugs are on the rise in a particular city, doctors and police can help prepare for the health implications, and try to combat them. But just imagine the more sordid uses of the information! Like the bragging rights these tests could give to urban decadents, or even travel guides. “New York City — highest per-capita use of injectable heroin in the nation!!”
(Thanks to Top 10 Sources for this one!)
I’m really not a morning person. But according to a new study, this tells you a lot about my personality: I’m more likely to be “creative”, “risk-taking”, “non-conformist” and “independent” than early risers.
This work came from the psychologist Juan Francisco Diaz-Morales, who recently decided to see if there were any regularities in the personality traits of early risers versus evening people. He took 360 undergraduates, ranked their relative sleep/wake habits, and then assessed them on the Millon Index of Personality Styles. According to the blog of the British Psychological Society, here’s what he found:
[Morning people] tend to be of a certain personality: they favour the tangible and concrete, they trust their experience and the observable over intuition and feelings; they have an attention to detail and a preference for logic. They are respectful of authority, care about social conventions and are rarely politically radical. [snip]
In contrast to morning types, evening people preferred the symbolic over the concrete, were creative and risk-taking, and tended to be non-conformist and independent.
Assuming this finding holds water, it’d have some pretty interesting implications for the workplace, eh? A smart company would organize its workday to optimize tasks based on which type of person is needed for the job — a logic-crunching task versus a blue-skying brainstorm — and when they’re likely to be at their best.
Indeed, I’ve long suspected that the 9 to 5 schedule is kind of suboptimal for productivity; it’s patently clear that different people shine at different times in the day. And you could argue that — for white-collar work, at least — the time-delimited bounds of the workday are more up for grabs now than they’ve ever been. Historically, one big reason we settled on the 9 to 5 timeslot was industrial efficiency: We needed people to be at their desks for roughly the same time period so they could work together. But email, mobile phones, and digital documents obviate a lot of those old-skool practical considerations. A lot of the rationale for 9 to 5 worktimes is now practically a phantom-limb phenomenon in corporate culture.
(Alas, the full study is behind a paywall, so I couldn’t read it, but here’s the official link.)
Here’s a fascinating dispatch from the new world of reputation management. The New York Times is now apparently receiving one request a day from people who want the paper to remove an old article from their online archive — because the article contains incorrect or incomplete information that makes the person look bad, and it’s cropping up on Google.
It’s an incredibly fascinating and troubling issue. The Times, like most newspapers, often runs a news brief when someone gets in trouble — but doesn’t print a followup when they’re cleared of the allegations, because it seems less newsworthy. In the past, this caused the subjects a lot of heartache, of course. But it’s far worse now, because a prospective employer, business partner or spouse Googles the person and … whoops, there’s the original news item, in the #1 or #2 slot on Google, still uncorrected, decades later.
What’s the answer here? Clark Hoyt, the Times’ public editor, tackled this one in his weekly column today, and discovered that the news editors at the paper are baffled about what to do. They could acquiesce and remove the articles, but they’d worry about where to draw the line; they don’t have enough resources to re-investigate every two-decade-old article that subjects complain about. (Someone could, of course, complain about a perfectly legitimate article in hopes of having it taken down.) And if they start changing or removing old articles, it could begin to erode the trust of those who use their archives for research: “What’s not here now, and why isn’t it here?”, they’d start to wonder.
The most interesting suggestion came towards the end of the column:
Viktor Mayer-Schönberger, an associate professor of public policy at Harvard’s John F. Kennedy School of Government, has a different answer to the problem: He thinks newspapers, including The Times, should program their archives to “forget” some information, just as humans do. Through the ages, humans have generally remembered the important stuff and forgotten the trivial, he said. The computer age has turned that upside down. Now, everything lasts forever, whether it is insignificant or important, ancient or recent, complete or overtaken by events.
Following Mayer-Schönberger’s logic, The Times could program some items, like news briefs, which generate a surprising number of the complaints, to expire, at least for wide public access, in a relatively short time. Articles of larger significance could be assigned longer lives, or last forever.
Mayer-Schönberger said his proposal is no different from what The Times used to do when it culled its clipping files of old items that no longer seemed useful. But what if something was thrown away that later turned out to be important? Meyer Berger, a legendary Times reporter, complained in the 1940s that files of Victorian-era murder cases had been tossed.
“That’s a risk you run,” Mayer-Schönberger said. “But we’ve dealt with that risk for eons.”
Programming a database to forget: I love it! This whole issue is another symptom of our increasingly weird digital world, where feats of memory that are superhuman — or inhuman, or both — are made possible via silicon. Last year, when I profiled Gordon Bell, the Microsoft researcher who’s trying to record every aspect of his daily activities in a “MyLifeBits” database, it raised a lot of deeply personal questions about the relative value of remembering versus forgetting. We humans rely on our faulty memories to make sense of the world, because remembering everything would drive us nuts; one definition of “wisdom” is “all the knowledge that’s left over after you’ve forgotten the less-important things you’ve ever learned”. But of course, having perfect recall can allow for new and deeply cool forms of knowledge: Google’s great at tying together strands of information I wasn’t even aware were connected until I hit “search”.
Has anyone ever tried to do what Mayer-Schönberger suggests — and model the act of forgetting in a database? In a way, you could argue that Google sort of already does this, insofar as any piece of data that appears on page 57 of the search results is essentially forgotten by the overmind, because almost no-one will ever read it. By this logic, one of the best ways to get Google to “forget” you is to seed the Net with really high-quality pages about you, which Google will find ever more interesting, driving the undesirable stuff downwards. This is sort of what I’ve always argued people should do if they’re unhappy with their Google identity: Start blogging, because it’s a pretty sure-fire way of eventually dominating the #1 slot, if you work hard and become good at it. Even so, there are no guarantees that the old Times report about your unjust drunk-driving arrest won’t appear somewhere on the first page. There’s no silver bullet here.
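Just for fun, here’s what a crude version of that “forgetting” might look like: a minimal sketch, assuming a toy archive where each item carries a base importance and a half-life keyed to its type. Every name and number below is my own hypothetical stand-in, not anything the Times or Mayer-Schönberger has actually specified.

```python
import time

# Hypothetical half-lives (in years) by item type: news briefs fade fast,
# features linger. These numbers are illustrative, not from any real archive.
HALF_LIFE_YEARS = {"news_brief": 5.0, "feature": 50.0}

def relevance(importance: float, age_years: float, kind: str) -> float:
    """Exponentially decay an item's base importance as it ages."""
    return importance * 0.5 ** (age_years / HALF_LIFE_YEARS[kind])

def publicly_searchable(item: dict, now: float, threshold: float = 0.1) -> bool:
    """An item drops out of the *public* index -- but is never deleted --
    once its decayed relevance falls below the threshold."""
    age_years = (now - item["published"]) / (365.25 * 24 * 3600)
    return relevance(item["importance"], age_years, item["kind"]) >= threshold

# A 20-year-old news brief has "expired" from public search,
# while a feature of the same age and importance is still findable.
now = time.time()
twenty_years = 20 * 365.25 * 24 * 3600
brief = {"published": now - twenty_years, "importance": 1.0, "kind": "news_brief"}
feature = {"published": now - twenty_years, "importance": 1.0, "kind": "feature"}
print(publicly_searchable(brief, now), publicly_searchable(feature, now))  # False True
```

The design choice worth noting: nothing is ever deleted. Items merely drop out of the public index once their decayed relevance crosses a threshold, which would preserve the archive for researchers while letting everyday search “forget” the trivia.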
This is an issue we’re going to hear more and more about in the future, I predict.
A couple of months ago Wired asked me to start writing a monthly column for the magazine, in which I analyze interesting collisions between science, technology, and society. I’ve got a couple of these now to blog, so here’s the first one: A column about how energy companies are beginning to use “ambient information” to hack our behavior and get us to conserve electricity. You can read it at the Wired site here, or check out the archived copy below.
I also appeared on WNYC radio talking about the column — and you can hear the segment online here!
Desktop Orb Could Reform Energy Hogs
by Clive Thompson
Mark Martinez couldn’t get Southern California Edison customers to conserve energy. As the utility’s manager of program development, he had tried alerting them when it was time to dial back electricity use on a hot day — he’d fire off automated phone calls, zap text messages, send emails. No dice.
Then he saw an Ambient Orb. It’s a groovy little ball that changes color in sync with incoming data — growing more purple, for example, as your email inbox fills up or as the chance of rain increases. Martinez realized he could use Orbs to signal changes in electrical rates, programming them to glow green when the grid was underused — and, thus, electricity cheaper — and red during peak hours when customers were paying more for power. He bought 120 of them, handed them out to customers, and sat back to see what would happen.
Within weeks, Orb users reduced their peak-period energy use by 40 percent. Why? Because, Martinez explains, the glowing sphere was less annoying and more persistent than a text alert. “It’s nonintrusive,” he says. “It has a relatively benign effect. But when you suddenly see your ball flashing red, you notice.”
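To make the mechanism concrete, here’s a minimal sketch of the rate-to-color logic Martinez describes. The price thresholds and the fetch_current_rate() feed are hypothetical stand-ins I’ve invented for illustration; the real Orb receives its data over a wireless network rather than polling like this, and Edison’s actual rate tiers will differ.

```python
import random
import time

def fetch_current_rate() -> float:
    """Hypothetical stand-in for a utility's real-time price feed (cents per kWh)."""
    return random.uniform(5.0, 35.0)

def rate_to_color(cents_per_kwh: float) -> str:
    """Map the current electricity rate to an orb color."""
    if cents_per_kwh < 10.0:
        return "green"   # grid underused, power cheap
    if cents_per_kwh < 25.0:
        return "yellow"  # getting pricier
    return "red"         # peak hours: dial back

# Poll the rate once a minute and update the orb; printing stands in
# for actually pushing a color to the device.
for _ in range(5):
    print("orb color:", rate_to_color(fetch_current_rate()))
    time.sleep(60)
```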
This is insanely cool: A scientist hooked up some subjects to virtual-reality systems — and hacked their brains into having an out-of-body experience.
The experiments were based on a long-known trick called the “rubber hand illusion.” In this one, people hide one hand in their laps while looking at a rubber hand on the table in front of them. A researcher strokes the fake with a stick — while simultaneously stroking the real hand in precisely the same way. Pretty soon the subject begins to identify so strongly with the rubber hand that if you smash it with a hammer, the subject will freak out and “feel” the pain.
These next experiments went one step further. They involved scientists hooking up subjects with virtual-reality goggles that displayed a 3D copy of their own body, as seen from behind, in front of them. (Basically, it was as if they were standing behind themselves.) The scientists rubbed the back of the avatar with a stick while performing the same action on the real subject’s body. Voila: The subjects began to identify with the avatar so strongly that they felt the avatar was their real body — i.e. that they were floating incorporeally behind themselves.
This is, of course, precisely the sensation people report in out-of-body experiences. There’s a story by Sandra Blakeslee in today’s New York Times on the experiment, and I was surprised to learn that out-of-body experiences occur not just during near-death events and freaky meditation sessions, but sometimes during “extreme sports”. Why? Possibly because any situation that screws up your proprioception badly enough can produce a dislocation of your sense of self. As Blakeslee writes:
The research reveals that “the sense of having a body, of being in a bodily self,” is actually constructed from multiple sensory streams, said one expert on body and mind, Dr. Matthew M. Botvinick, an assistant professor of neuroscience at Princeton University.
Usually these sensory streams, which include vision, touch, balance and the sense of where one’s body is positioned in space, work together seamlessly, Dr. Botvinick said. But when the information coming from the sensory sources does not match up, the sense of being embodied as a whole comes apart.
The brain, which abhors ambiguity, then forces a decision that can, as the new experiments show, involve the sense of being in a different body.
This makes me wonder: Have any video-game players identified so strongly with their onscreen avatars that they have a slightly out-of-body experience?
Here’s one last delightful part to this story. The guy who conceived of the virtual-reality experiment? Here’s how he came up with the idea:
Last year, when Dr. Ehrsson was “a bored medical student at University College London,” he wondered, he said, “what would happen if you ‘took’ your eyes and moved them to a different part of a room.”
“Would you see yourself where your eyes were placed?” he said. “Or from where your body was placed?”
I love the way scientific breakthroughs often come from hallucinogenically odd daydreaming. Researchers let their minds drift into super weird directions and … boom! It’s like how Einstein imagined himself riding on the crest of a beam of light, tried to envision what the world would look like, and came up with the theory of relativity.
Many sports fans know that a short season leads to unfairness and chaos. The shorter the season of their favorite sport, the more likely it is that a comparatively weak team will ascend the ladder — merely by luckily winning a few key games.
So a couple of physicists recently decided to calculate precisely how long the major-league baseball season would need to last to be genuinely fair. They began with this assumption: To truly control for random outcomes — for the slim chance that, in any given game, the lesser team will accidentally beat the better one — you’d need to play a total number of games equal to the cube of the number of teams involved. With 16 National League teams, that’s 4096 games, and 2744 for the 14-team American League.
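You can get a feel for the underlying logic with a quick Monte Carlo toy model — my own sketch, not the physicists’ actual calculation. Assume each team has a fixed true strength, give the weaker team in any matchup some upset probability (I’ve used 0.44 purely for illustration; treat it as a free parameter), and count how often the genuinely best team finishes a season of a given length with the best record:

```python
import random

def simulate_season(n_teams: int, games_per_team: int, upset_prob: float = 0.44) -> bool:
    """Return True if the best team (index 0) ends with the sole best record.

    Teams are ranked by index: in any matchup the lower-indexed team is
    stronger, and the weaker team wins with probability upset_prob.
    """
    wins = [0] * n_teams
    for _ in range(n_teams * games_per_team // 2):  # random pairings, roughly balanced
        i, j = random.sample(range(n_teams), 2)
        strong, weak = min(i, j), max(i, j)
        winner = weak if random.random() < upset_prob else strong
        wins[winner] += 1
    best = max(wins)
    return wins[0] == best and wins.count(best) == 1

for games in (16, 162, 256):
    trials = 1000
    hits = sum(simulate_season(16, games) for _ in range(trials))
    print(f"{games:>3} games per team: best team tops the league {100 * hits / trials:.0f}% of the time")
```

The longer the season, the more the upsets wash out — which is the whole intuition behind the cube-law estimate.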
Of course, there ain’t no way anyone’s going to sit through that much baseball. So they decided to scale back the pursuit of perfection, and calculate how many games would result in a situation that was not perfect, but way more fair than the current system. Their number? A full 256 games — much more than the 162 each team plays in the current National League season. As they put it in a press release:
By adding a preliminary round to the season, and eliminating the weakest teams before regular league play begins, the physicists showed that the best team in the National League would be virtually guaranteed to be among the top two or three teams with the best records, even with a significantly reduced number of games. Although the very best team may not always end up in the lead, a preliminary round or two would at least ensure that the top teams aren’t eliminated from the playoffs through simple bad luck.
I confess I know so little about pro sports that I cannot even begin to figure out whether their assumptions hold water, but it seemed like a pretty fun little finding to me.
Last week, Wired News published my latest video-game column, and it centers on a topic I’ve long obsessed about: The relationship between architecture and the design of a good game — or even the appreciation of a good game.
The piece is online here, and a copy is archived below:
The Subtle Pleasures of Building a Dungeon
by Clive Thompson
I’ve been stumping down this long, stony corridor for about five minutes, trying to reach a remote chamber where I’ll battle a dread knight. But it’s ponderously slow going, because there are too many twisty nooks — which attract evil bats, so I’m forced to stop and fight every 20 seconds. So, like many RPG gamers, I start bitching: Who designed this place?
Ah, that’s the problem. I did.
I was playing Dungeon Maker: Hunting Ground on my PSP. It’s a fairly by-the-numbers role-playing game, with one twist: You create the dungeon yourself.
Basically, you build a small dungeon, then wander around in it in real time, killing any monsters that show up. You use the loot from your kills to build an even more tricked-out, phat dungeon — which attracts ever more lethal and profitable monsters. This allows you to build an even bigger dungeon, attracting yet more monsters, etc., etc. It is, if you can dig this, a recursive dungeon crawler.
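That feedback loop is simple enough to sketch in a few lines of code. This is purely my own illustration of the structure, with toy numbers — nothing from the actual game:

```python
def crawl(dungeon_size: int) -> int:
    """One pass through your own dungeon: bigger dungeons mean more, richer monsters."""
    monsters = dungeon_size * 2       # more rooms attract more monsters
    loot_per_kill = dungeon_size * 5  # tougher monsters drop better loot
    return monsters * loot_per_kill

COST_PER_ROOM = 10
gold, dungeon_size = 0, 1
for day in range(1, 6):
    gold += crawl(dungeon_size)         # fight everything that showed up
    new_rooms = gold // COST_PER_ROOM   # reinvest the loot in construction
    gold -= new_rooms * COST_PER_ROOM
    dungeon_size += new_rooms
    print(f"day {day}: dungeon size {dungeon_size}, gold left over {gold}")
```

Each pass through the loop funds a bigger dungeon, which attracts stronger monsters, which fund a bigger dungeon: the recursion the column is describing.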
Metacognition is the ability to be aware of your mental state — to think about thinking. Historically, psychologists have assumed it’s a pretty high-level process, which we can’t really do until age five or so. Researchers would test preschoolers on their ability to assess their own mental state, and find that the kids couldn’t.
But Simona Ghetti, assistant professor of psychology at UC Davis, recently wondered if the problem wasn’t simply in the experiments. She noticed that most tests of metacognition asked the participants to use words to describe their internal states — which, she theorized, is why little kids couldn’t do it very well. The barrier was linguistic, not cognitive. So she devised a metacognition test that asked preschoolers instead to point to pictures to illustrate their internal state. Ghetti would pose the kids a question, and ask them to point to a picture of a confident-looking child if they were sure of the answer, or a doubtful-looking child if they weren’t sure.
Bingo: The preschoolers apparently had no trouble identifying their internal state. As Ghetti put it:
The tests showed that young children are aware of their uncertainty in the moment. Even 3-year-olds pointed to the confident face when they correctly identified, for example, a drawing of a monkey that had some features removed to make it harder to recognize. They pointed to the doubtful face if they could not come up with a correct answer.
“Even 3-year-olds are more confident when they’re right than when they’re wrong,” Ghetti said.
This experiment, of course, comes on the heels of the interesting work of the neuroscientist Jonathan Crystal, who this spring argued that even rats demonstrate metacognition. Again, his trick was simply devising a more clever way of scrutinizing rat thought processes.
Granted, either or both of these studies could prove to be flawed or ultimately misleading. But in general, they demonstrate the part of the scientific process that I always really love — the ability of a new experimental protocol to unveil new information about the world. I write a lot about scientists, and I’ve grown to hugely admire the ones who are really good at devising elegant new experimental techniques.
How do whales sleep? It’s always been difficult to tell, because we can’t easily observe their daily habits. But back in the late 90s, a female gray whale was rescued at sea and ensconced at Sea World in San Diego, where a couple of scientists recorded its wake/sleep behavior for nine days solid. They later wrote a paper with their observations (PDF here).
The results? Well, it turns out that a busy day of sieve-feeding benthic crustaceans really knocks you out. The whale slept about 40% of the day, or about 9.5 hours. Also, the whale was diurnal, sleeping, like us, mostly at night.
Cool enough. But given the giant brains those multiton beasts are carrying around, the really big question is: Do whales dream? The scientists recorded eye movements and neck-and-body jerks that suggested that indeed, “paradoxical sleep” — REM — might be going on. As they wrote …
… we think that the presence of jerks during rest in the gray whale, taken together with our previous data on three species of dolphins, allows us to suggest that short episodes of PS do exist in Cetaceans in a modified form that is not accompanied by the classical polygraphical or behavioral signs of PS observed in most terrestrial mammals.
So, having duly cited the literature, we are now free to engage in the deliriously unscientific pastime of wondering: What in god’s name are whales dreaming about? The underwater scenery? Prime numbers? The telepathic messages they’re receiving from Alpha Centauri?
My favorite part of the paper is the diagrams showing the posture of the whale during sleep. Apparently she either floated slightly below the surface of the water, or chillaxed on the floor of the tank. Since this wasn’t an in-the-wild observation, of course, it doesn’t tell us whether or not whales would behave the same way in the briny deep, but perhaps future studies will explore this.
(Thanks to Science Blogs for this one!)
When I moved to New York nine years ago, my environment suddenly became much less healthy. On top of the nasty pollution, stress and overwork, my exercise evaporated: I had to give up my lifelong habit of daily cycling because the traffic is too psychotic. I figured I was probably shaving about five years off my life by moving here.
Whoops. It turns out that the Big Apple is actually good for you: New Yorkers now live longer than the American average, and what’s more, life expectancy is rising faster here than in most of the rest of the US.
Why? That’s what I tried to figure out in a story I published last week in New York magazine. It’s online for free, and a copy is permanently archived below!
Why New Yorkers Last Longer
This city, once known as a capital of vice and self-destruction, is now a capital of longevity. What happened?
By Clive Thompson
Last winter, the New York City Department of Health released figures that told a surprising story: New Yorkers are living longer than ever, and longer than most people in the country. A New Yorker born in 2004 can now expect to live 78.6 years, nine months longer than the average American will. What’s more, our life expectancy is increasing at a rate faster than that of most of the rest of the country. Since 1990, the average American has added only about two and a half years to his life, while we in New York have added 6.2 years to ours. In the year 2004 alone, our life expectancy shot up by five months — a stunning leap, because American life spans normally increase by only a month or two each year. When these figures came out, urban-health experts were impressed and slightly dazed. It turns out the conventional wisdom is wrong: The city, it seems, won’t kill you. Quite the opposite. Not only are we the safest big city in America, but we are, by this measure at least, the healthiest.
One of the things I loved about early 80s video games is how incredibly weird they were. Half-eaten yellow pizzas being chased around by ghosts? Trampoline-enabled police mice pursued by cats? A plumber, hunting and killing the ape who stole his princess girlfriend? Ahem.
So for years, I always wished some game designer would just rip the lid off and finally make a game that was straightforwardly surrealistic — where cause and effect had only a very inscrutable relationship to one another. Like maybe the control scheme keeps switching unpredictably, or your character transforms for no good reason at random intervals.
Le voila. Today I happened upon game, game, game and again game, a superstrange creation by Jason Nelson, and it pretty much satisfies all my criteria. The game, as Nelson describes it, is …
… a digital poem/game/net artwork hybrid of sorts. There are 13 curious levels filled with poetics, hand drawn creatures, scribbles, backgrounds and other poorly made bits. The theme (cringe) hovers around our many failed/error filled/compelling belief systems, from consumerism to monotheism.
Gameplaywise, it involves you piloting a small blob around various delightfully pen-scribbled scenes. You’ve got goals … sort of. And destinations … sort of. When you bump into things, it does … uh … something, including triggering trippy sound samples, text boxes, transformations of the screen, and archaic pop-up home video. Oddly mesmerizing!
In the spring, Wired asked me to visit Bungie — the company that makes the insanely popular Halo series of video games — and report on the making of Halo 3, their latest sequel, due out next month. The story is in the current issue, and it focuses mostly on the company’s crazily data-oriented mechanism for testing the game to see how fun and playable it is.
The full piece is currently online at Wired’s site, and a permanent archived copy is below, but hey! Why not rush out right now and buy the print copy too?
Halo 3: How Microsoft Labs Invented a New Science of Play
3,000 hours of testing, a one-way mirror, and a team of designers obsessed with finding the golden mean of play
by Clive Thompson
Sitting in an office chair and frowning slightly, Randy Pagulayan peers through a one-way mirror. The scene on the other side looks like the game room in a typical suburban house: There’s a large flat-panel TV hooked up to an Xbox 360, and a 34-year-old woman is sprawled in a comfy chair, blasting away at huge Sasquatchian aliens. It’s June, and the woman is among the luckier geeks on the planet. She’s playing Halo 3, the latest sequel to one of the most innovative and beloved videogames of all time, months before its September 25 release.
The designers at Bungie Studios, creators of the Halo series, have been tweaking this installment for the past three years. Now it’s crunch time, and they need to know: Does Halo 3 rock?
“Is the game fun?” whispers Pagulayan, a compact Filipino man with a long goatee and architect-chic glasses, as we watch the player in the adjacent room. “Do people enjoy it, do they get a sense of speed and purpose?” To answer these questions, Pagulayan runs a testing lab for Bungie that looks more like a psychological research institute than a game studio. The room we’re monitoring is wired with video cameras that Pagulayan can swivel around to record the player’s expressions or see which buttons they’re pressing on the controller. Every moment of onscreen action is being digitally recorded.
Two weeks ago, we got drowned here in New York when a flash storm dumped three inches of rain on the city. It doesn’t sound like much, but considering that about 13 million gallons of water flood into the subway on a completely bone-dry sunny day, the additional gallons totally b0rked the system — and the trains ground to a halt, which meant New York ground to a halt.
So I was intrigued to happen upon a recent study claiming that cities actually increase the intensity of storms. Two Princeton engineers gathered a boatload of data about a humongous storm that slammed Baltimore in July 2004, looking at lightning strikes, rainfall, clouds and aerosols. Their conclusion? The structure of the city exacerbated the storm — producing 30 percent more rainfall than the storm would have dropped on nearby non-urban countryside. As they noted in a press release:
Much of the lightning during the 2004 storm wrapped around the western edges of Baltimore and Washington, D.C., to the south. “It’s as if all of a sudden the lightning can ‘feel’ the city.”
Sentient thunderstorms. I love it. Run for your lives!!
Seriously, though, they hypothesize that there’s a bouquet of urban-design-related factors at play here, including the “urban heat island effect”, which adds energy to a thunderstorm, as well as tall buildings that increase wind drag and provide “boiling action” that boosts rainfall. Pollution, they think, might also increase the yield.
Well, this blog, anyway. Let’s see, my last post was on — what — May 9? Or more precisely, Stardate -316352.05, according to this converter.
Things have been, ahem, a tad hectic in my work life in the last three months, hence the radio silence. However, I have since repowered the shields and located a fresh dilithium crystal on eBay, so it’s all good now.