Sorry that I’ve not been updating this week — I’m in Las Vegas at the Black Hat Security Briefings Conference. I’ll post some of the fun stuff I encounter when I get back tomorrow.
Interestingly, this is the first time I’ve ever been in Vegas, and because I’m trying to do this on the cheap, I’m staying in a crappy hotel in the low-rent area of Vegas. It’s quite amazing how deeply screwed-up are the poor ‘hoods in this city. It would be banal to point out the obvious ironies — i.e. that there’s a Niagara Falls of cash thundering in the casinos only a few miles away, even as the local crack addicts get into tooth-loosening brawls over who gets the last can of Lysol. But what struck me more is the weird mix of hopefulness and total despair that haunts the streets of decrepit Vegas: Everyone seems balanced between a) striving incredibly hard for some big score just around the corner, and b) just giving the hell up once and for all, with neither a) nor b) dominant.
These are my massive cultural generalizations, of course, and probably have all the accuracy of those of a Victorian anthropologist. But it’s a weird, weird part of town.
Apparently, Samuel Waksal — the ImClone Systems CEO who was busted for insider trading — didn't get to go to the jail of his choice. He asked to go to Eglin, a federal "prison camp" in Florida so cushy that Forbes magazine dubs it "Club Fed". Instead, as Forbes reports, he'll be packed off to the Schuylkill Federal Correctional Institution in Minersville, Pa.:
The biggest downside of Schuylkill: Unlike other freestanding, low-security camps, Schuylkill is attached to a more menacing, medium-security facility. While the prison camps where white-collar types usually end up look like college campuses, medium-security prisons look like, well, prisons. Inmates at the low-security camp are often required to work in the medium-security buildings, surrounded by guards, barbed wire and more serious offenders. There, the former ImClone Systems (nasdaq: IMCL) chief executive is likely to be washing dishes and sweeping floors for pennies an hour.
The big debate is, of course, whether jailtime actually prevents corporate malfeasance. Some say it doesn’t; these guys are so in love with their power that they could never imagine it’ll happen to them. One interesting solution? A professor at the University of Maryland’s MBA program sends his students to prison for a “scared straight” session — where they get to feel what it’s like living behind bars, and talk to CEO jailbirds.
How’s this for irony? Global warming has gotten so bad in Alaska that it is no longer possible for oil companies to go up there and drill for more oil. According to a story in the Houston Chronicle:
A state rule says heavy exploration equipment can be used on fragile tundra only when the ground is frozen to 12 inches deep and covered by at least 6 inches of snow.
However, because winters in the Arctic are becoming shorter, the number of days the tundra meets those conditions has shrunk from more than 200 in 1970 to only 103 last year, a state document notes.
I love this stuff. According to the BBC, a "giant robotic balloon escaped from a science centre in South Yorkshire". It was being transported from one building to another when a freak gust of wind knocked it loose from its handlers.
The thing is, the "flyborg" will be quite hard to catch — because it has a collision-detection system to help it avoid obstacles. It'll actually pull defensive manoeuvres to avoid being taken in. And since it can remain airborne for a week before deflating, the company that built it has had to alert air-traffic officials to warn aircraft about an artificial-intelligence craft roaming the country's skies.
But you know the really weird thing? This “robot escape” stuff is happening more often. As we build ever more devices that are self-piloting, more of ‘em are busting loose and hitting the open road. Last fall, a British professor was working on a self-piloting drone, when he turned around to discover it was missing. As The Age reported:
Professor Noel Sharkey said he turned his back on the drone and returned 15 minutes later to find it had forced its way out of the small make-shift paddock it was being kept in.
He later found it had travelled down an access slope, through the front door of the centre and was eventually discovered at the main entrance to the car park when a visitor nearly flattened it with his car.
Jesus. At this rate, Skynet will go live, like, any month now. We're doomed.
(Thanks to Slashdot for this one!)
Last Wednesday, I went to see “Game Engine” — a selection of innovative video-game movies that were screened as part of the New York Video Festival. Much of the evening was devoted to exploring an interesting question: Where do films and video games begin to merge?
What fascinated me most was the reaction of the filmmakers in the audience. These are guys who, by and large, don’t play video games. If they’ve ever thought about games, it’s because they’re fascinated by the increasing “realism” of 3D graphics. They think that if games are getting better, it’s because they’re becoming more like movies — more able to render lifelike human characters with realistic facial expressions. These filmmakers are not alone. Indeed, this basic concept — that games get better the more they resemble movies — is the dominant way that mainstream cultural critics think about games.
But as I've argued before, this completely misunderstands the nature of games, and the nature of narrative. Games are not some poor cousin of movies, desperately attempting to become photorealistic enough that they can have convincing, dramatic scenes. Games are about creating systems that you play in. They're about establishing a situation with a few basic rules, and then turning players loose to see what they'll do. That's why games are often most impressive when they're acting not as dramatic wannabe movies, but as little physics simulations: What happens when you blow that thing up? Or when you jump off that ledge? Or when you drive that tank at the wall at 200 miles an hour? Or there are games like The Sims, which create simulations of what I call "emotional physics": They render hypothetical social situations. (Hey, what would happen if a family had a domineering rock-star kid, a passive model-train-collecting father, and a heroin-abusing crackhead mother? Let's find out!)
The point is, the pleasure in these games is not even vaguely like the pleasure of narrative. They’re inherently interactive. Narrative inherently isn’t. Narrative is about surrendering yourself to the author’s will; as Northrop Frye once pointed out, the fun is not in crafting the story yourself, but in asking “yeah? And then what happened? And then? And then?” It’s about not being in control. Narrative is, at heart, a rather masochistic pleasure. A good narrative drags you along almost against your will, which is precisely why we describe a really good novel as a “page-turner”. It grabs us and forces us to keep going, keep reading, for hours and hours, long after we know we should turn off the light and go to bed. That’s the pleasure of narrative — and while it’s wonderful, it’s not even vaguely interactive.
So it's never been a surprise to me that video games are at their absolute worst when they most desperately attempt to include traditional narrative. At the New York Festival showing, the audience sat through a couple of the narrative "cut scenes" from gangster games like The Getaway, or shooter games like Half Life. In each case, the game-makers had crafted little dramatic scenes where you stop playing for a second, and just watch the characters deliver prescripted lines. And uniformly, these scenes were simply awful. The scripts were pretty leaden, and the characters looked static and inert. And the crowd of filmmakers noticed it. You could see them looking around at each other, going, what the hell? This is crap. This is the shittiest filmmaking I've ever seen in my life.
But then something else came on the screen, and completely blew them away. It was a short video done by a fan of the snowboarding game SSX. The gamer had pretty much mastered the game, and perfected the most incredibly cool moves imaginable — really hilarious stuff that simply wouldn’t be possible in real life, like having a snowboarder jump off the board, spin around through the air like a ballet dancer on point, and then grab the board again in time to hit the powder at 100 miles an hour. The gamer recorded hundreds of shots of these moves, and stitched the best ones together into a video set to the Evanescence tune “Bring Me To Life”. (You can download it and view it here yourself!)
The filmmakers loved it. They went berserk. Because here, finally, was something genuinely innovative. The insane camera angles in the game — soaring through the air alongside the pirouetting snowboarder, zooming in and out in the blink of an eye — would be physically impossible in the real world. Indeed, these moves are what's most game-like about games: The ability to generate fantastic new physics, environments, and worlds. By using the virtual world inside the game, the gamer had produced virtual "camera work" that was crazily cutting-edge … stuff that the real-world filmmakers would never dream of trying. (In fact, they probably couldn't try it even if they wanted to, because such camera work in the real world would be prohibitively expensive. But inside a virtual world, anything goes.) When you watch that SSX movie, you realize that the closest cousin to the modern video game is not the movie but the music video. Music videos are another genre that relies heavily on hallucinogenic, Dali-esque video tricks — and, incidentally, does not rely on narrative very much at all.
To me, it’s more proof that game-makers should abandon their awful and incredibly boring attempts to put movie-like elements in their games, and focus on what makes games, well, games.
There’s an incredibly interesting story at CNN.com about a hacker who installed keystroke-sniffing software on Internet terminals at several Kinko’s stores in New York. He captured over 450 passwords and used them to access and open bank accounts online. But the really wacky thing was how he was caught:
Jiang was caught when, according to court records, he used one of the stolen passwords to access a computer with GoToMyPC software, which lets individuals access their own computers from elsewhere.
The GoToMyPC subscriber was home at the time and suddenly saw the cursor on his computer move around and files open as if by themselves. He then saw an account being opened in his name at an online payment transfer service.
God, that’s so cinematically beautiful it’s almost scripted.
(Thanks to Slashdot for this one!)
Sending text messages on phones is now so popular in China that it’s given rise to an entirely new form of repetitive stress injury: Sore thumbs. From a story in the China Daily:
Wang Heping, a senior doctor with Renmin Hospital of Northwest China’s Gansu Province, said: “I have diagnosed four cases of tenosynovitis this year, and I believe there are more people who haven’t seen a doctor.”
Thumbs might be hurt by pressing the phone keypad too much, too quickly and in a very small area, Wang said.
“To keep your thumbs healthy, the best way is to write less messages and do more physical exercise,” he said.
(Thanks to Techdirt Wireless News for this one!)
So, Dockers has been heavily promoting its new Go Khakis — which include nanoengineered materials that make 'em particularly resistant to stains. It's really just a Teflon coating, apparently, but since the company has been throwing around the word "nanotechnology" so much, the wits at Popular Science decided to call up the corporation and find out if they actually knew what the heck nanotechnology is. The entire transcript is online, but here's a taste:
Popular Science: Can you explain what makes this nanotechnology rather than just a coating? What is nanotechnology?
D: One moment please. Did you get the pleated or flat-front?
PS: Flat-front.
D: OK, one moment please. Because the one that says nanotechnology is the versatile pant that wicks moisture away from you …
PS: I still don’t understand. Are there microscopic machines repelling the stain? How does it work?
D: Umm … I guess it’s the type of fabric that makes it the nano.
PS: So the “nano” has more to do with the size of the fibers? And water is small enough to get through for washing, but other liquids are not—they bead up and roll off?
D: You know, I’m really not sure, but I do know they’ll come clean. My kid has a pair of these. Messy kid. So I got the shirt and pants, and he’s doing great with them. You just need to remember to press after every fifth wash.
It’s an old, nasty trick — calling up the poor info-line workers to hassle them about arcane details of their products. But you know, finer entertainment cannot be had.
Normally, I use America Online’s Instant Messenger. But this Friday? I’m switching to Microsoft Messenger.
Why? Because Microsoft is giving away cash to random Messenger users! As CNN.com reports:
MSN, which is expected to release the finalized version of its MSN Messenger 6.0 program on Thursday, will run a promotion offering $1,000 in cash to 10 users each Friday, starting July 25. The lottery will award $1,000 an hour from 9 a.m. to 7 p.m. PT to randomly selected users. The weekly $10,000 giveaways will run for four Fridays, ending August 15.
Now this is what I call advertising: Giving away buckets of cold, hard cash. It’s a fine, time-honored technique, pioneered back in the 50s when Crazy Eddie used to hand you 50 bucks just for taking an Edsel for a spin. Not that Microsoft Messenger is, like, an Edsel or anything. Uh. Yeah.
Anyway, as of Friday, anyone online on MSN Messenger can find me by pinging “collisiondetection99”.
(Thanks to Slashdot for finding this one!)
Security freaks tell you that you should always pick a complex, non-intuitive password — a string of gibberish like "xyk95woi". Most people don't do this. One day, I asked everyone I knew how they'd developed their email passwords. Sure enough, more than half were just using their own last name — or their birthdate or their cat's name, or something equally guessable. This is because of a simple human fact: People have trouble remembering long strings of gibberish. They need some sort of mnemonic.
So a couple of Microsoft researchers figured out a funky new technique for generating — and remembering — complex, weird passwords. They present you with a string of inkblots, like the one above. You figure out what each one looks like to you; then you use the first and last letter of each to generate a password — one that is very gibberish-like indeed. For example, if you saw inkblots that looked like a “fly”, a “helicopter”, a “lung” and a “fish”, you’d have “fyhrlgfh” as your password. When you want to log into your email but you’ve forgotten your password, the software simply shows you the exact same bunch of inkblots — and you remember the words you thought of.
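Just to make the scheme concrete, here's a minimal Python sketch of that first-and-last-letter rule. It is not the researchers' actual software, and the word list is simply the hypothetical example above.

```python
# Minimal sketch of the inkblot-password rule described above: take the first
# and last letter of each word a user associates with an inkblot, and string
# them together. The word list is just the hypothetical example from the post,
# not data from the study.

def inkblot_password(associations):
    """Build a password from the first and last letter of each word."""
    return "".join(word[0] + word[-1] for word in associations)

if __name__ == "__main__":
    words = ["fly", "helicopter", "lung", "fish"]  # what you "saw" in the blots
    print(inkblot_password(words))                 # -> "fyhrlgfh"
```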
The thing is, this system is almost completely uncrackable. Why? Because of another quirk of human cognition: No two people ever think an inkblot looks like the same thing. As a Microsoft report on this notes:
Stubblefield and Simon found out that once we’ve identified the inkblot we see it the same way every time. And even though people sometimes see similar things in inkblots, they describe it in different ways. For instance, almost all the users in their study identified the inkblot below as some type of flying person. But the users described their flying person differently, such as ‘evil flying henchman’ or ‘flying gardener.’
Mind you, this is also an insanely complicated system — and as security people will tell you, any security system that’s too complex will be abandoned by its users. They’ll go back to using their cat’s name as a password.
But no, in case you're wondering — my email password isn't "Smokey".
(NOTE: There is a totally killer discussion of the psychology of passwords taking place in the discussion thread on this topic. Go read it now!)
(Thanks to Slashdot for this one!)
I honestly can’t imagine a headline I’ve written more likely to get people reading.
Nonetheless, this is an actual proto-meme. Last week, MSNBC published the following article:
A paintball manufacturer, advocates for women and the mayor of Las Vegas are expressing outrage that a Las Vegas company claims to be charging men up to $10,000 to use the non-lethal but dangerous weapons to shoot naked women racing through the sagebrush. But a creator of the “Hunting for Bambi” game defended the enterprise as good, clean fun for “guys who thought they had done everything.”
Ahem. As you might expect, Las Vegas mayor Oscar Goodman was immediately up in arms. “As soon as I found out about this, I called for an investigation,” Goodman reportedly said. “Las Vegas is a place where anything goes, but this crosses the line if this is real.”
Ah, but is it real? According to an investigation done by the Urban Legends Reference Page — a site that, given the profusion of Internet rumors, may now be more culturally important than the New York Times — Hunting For Bambi is not actually offering this as an ongoing service. No, they’re just selling $20 tapes of some women getting splattered with paintballs — which plants them squarely in the increasingly seedy territory of Xtreme-reality-staged-to-look-as-if-it’s-gone-wild entertainment. One clue that we’d stepped past this Rubicon might have been the florid promotional copy on the Hunting For Bambi web site:
More shocking than anything you’ve ever seen before. Labeled by CBS News as a cross between Sex and Violence a deadly combination! Women are being hunted down like animals and shot with paintball guns. This Raw and completely Uncensored video is a cross between Bum Fights and Girls Gone Wild and is sure to be the topic of many Howard Stern Show fans. You will be completely stunned when you see some of the wildest, most outrageous moments ever caught on tape.
You can almost see the rebus-like math at work here: Take Bum Fights, multiply it by Girls Gone Wild, raise it to the Howard Stern exponent and … presto! Shooting topless chicks with paintballs. Hell, we could have done the calculations ourselves and seen this coming. I predict that by summer's end we'll have added the next few variables, and be swamped with videos of barely-legal Swedish girl scouts being shot by rocket onto the International Space Station and forced to play strip poker. By robots.
(Many thanks to Erik at CultureRaven for pointing this one out!)
Remember the movie Footloose? My apologies in advance to anyone who'd managed to forget it, but I need you to reactivate those brain cells right now. Because I just stumbled upon something incredibly cool — a site that indexes examples of game theory as represented in popular culture: Movies, television, books and music.
Who knew that by watching Footloose, we were getting the following lesson:
A game of chicken on a narrow road, with ditches on each side for effect, and riding tractors. A stuck shoe lace demonstrates commitment.
Lately, I've been debating space-flight video games with my friends. One of them complained that games like Rogue Squadron — an incredibly cool Star Wars-themed game — are weirdly unrealistic. "The ships are in outer space, but they fly like planes," he argued. For example, when you try to turn, your ship has a big turning radius — even though it's not pushing against any air. A real space-ship could just instantly rotate itself in any direction it wanted and shove off.
I argued with him, posing the alternative question: Who wants real physics? The whole point of a game is to present you with a version of reality that’s as stylized as a sonnet. In fact, if the game actually represented the realistic physics of the cosmic void, controlling those X-wings and TIE Fighters would be a total nightmare. Space physics are incredibly weird.
That's what NASA found back in the 60s, when it first tried to do something you'd expect would be quite simple: To dock two spacecraft together. There's a superb story in last year's Invention and Technology magazine on this. NASA had a Gemini capsule fly up to an empty rocket casing that was floating in orbit, and try to dock with it from behind. But, as the astronaut in control found, the physics quickly became awfully strange:
… simply pointing the craft toward the target and firing his rear attitude thrusters did not help him overtake the spent stage, as one would expect. Instead, the distance between them actually increased, as if he were pressing a car’s gas pedal in forward gear and the car was moving in reverse.
This was where Newtonian mechanics kicked in. By increasing his craft’s speed, he had increased its distance from the earth. In this new, higher orbit, the craft’s linear velocity, measured in miles per hour, was greater than before. But its angular velocity—the rate at which it was traveling around the earth, measured in revolutions per hour—was lower. As Kepler had pointed out, objects in low orbits will complete an orbit around the earth faster than those in high orbits, even though their linear velocity is lower.
Thus, by speeding up his spacecraft, McDivitt had made it circle the earth more slowly than the craft he was trying to catch up with. Mission Control had to call off the rendezvous, since McDivitt was using too much fuel. NASA engineers and astronauts extracted a valuable lesson from this mission: It was difficult, if not impossible, to steer a spacecraft merely by eye. The orbital dynamics are so counterintuitive that—combined with the lack of references for judging distances—no human could do the job without help from electronic sensors.
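You can see the basic effect with a little back-of-the-envelope code. The sketch below just applies the standard circular-orbit formula (period equals 2 pi times the square root of r cubed over GM); the altitudes are arbitrary examples for illustration, not the actual Gemini figures.

```python
# Back-of-the-envelope illustration of the orbital mechanics described above:
# for a circular orbit, the period T = 2 * pi * sqrt(r^3 / GM), so a higher
# orbit takes *longer* to circle the Earth even though you thrusted forward
# to get there. Altitudes are arbitrary examples, not the Gemini 4 numbers.
import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0    # mean Earth radius, m

def period_minutes(altitude_m):
    r = R_EARTH + altitude_m
    return 2 * math.pi * math.sqrt(r**3 / GM) / 60

for alt_km in (160, 200):   # a "low" chaser orbit vs. a slightly higher one
    print(f"{alt_km} km altitude: orbital period ~{period_minutes(alt_km * 1000):.1f} min")
# The higher orbit's period is longer, so its angular velocity around the
# Earth is lower: thrust forward, drift behind.
```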
All of which goes to show: Realism is a huge pain in the ass. I’ve never entirely understood people who assume that “realism” is the tautological goal of video games; the more realistic they are, the more fun they’ll be. Quite the contrary! Sure, the Y-wing fighters in these games are as easy to pilot as a Camaro. But I wouldn’t have it any other way. I get enough reality sitting here at my bloody desk, thank you very much.
I realize that Liz Phair has recently been subjected to some pretty nasty criticism over her new album. A few weeks ago, Meghan O'Rourke wrote a piece in the New York Times claiming that Phair had "committed an embarrassing form of career suicide" with her new attempt to transform herself into a pop princess — as in the rock-chick-moue pose above.
Anyway, Phair went completely batshit about the review and wrote a letter to the New York Times that is just spectacularly unhinged. It’s online here, but in part:
Once upon a time there was a writer named Chicken Little. Chicken Little worked very hard and took her job very seriously. Often, she even wrote. One day, just as Chicken Little was about to have an idea, she heard something falling on her roof. “The sky is falling! The sky is falling!” she shrieked, spilling green tea and vodka all over her work station. This commotion awoke her three readers, who lived with her in her hut, and all three rushed outside to see what had happened to the sky. After enduring several anxious minutes alone, Chicken Little was relieved to see her readers return. “Oh, Chicken Little, it was just the trees dropping their buds on a beautiful spring day,” they said. Chicken Little tried not to show her disappointment.
It actually gets weirder than that. Read the whole thing.
I love it. The Canadian Tourism Commission decided to make some new travel guides — highlighting such national marvels as the new, Inuit-controlled territory of Nunavut, and the country's famously gorgeous maritime provinces.
So the government hired Fodor’s to produce maps. When the maps came in, as the National Post recently reported, there were a few problems:
Among the errors: Nunavut is spelled as “Nunavit,” the cities of Fredericton and Halifax are absent, and Prince Edward Island and Yukon are omitted entirely. The province of Newfoundland and Labrador is also misidentified — the official name of the province was changed to include Labrador in 2001 …
“This is not our finest hour,” said Stuart Applebaum, a spokesman for Random House Inc., the publisher of the popular guides. “We’re very sorry about the errors and we’re making every effort to correct them as quickly as possible.”
University of Wisconsin professor James Paul Gee thinks we ought to have kids playing video games by age three — to make sure they get a good education. In his new book What Video Games Have to Teach Us About Learning and Literacy, he identifies 36 "good learning principles" built into video games. There's a great interview with him online now at GameZone:
In my view—and I know it is controversial—kids should be playing games from early on, three-years-old, say. They should start with computer games like Dr. Seuss’s Cat in the Hat, Winnie the Pooh, Pajama Sam, and Spy Fox, initially playing these games with the parent. They can then move on to games like Pikmin, Animal Crossing, Zelda, and Age of Mythology. But there is a proviso here. Parents must ensure that kids play games proactively, that is, that they think about the design of the game, the types of thinking and strategies it recruits, its relationship to other games, books, movies, and the world around them.
And as for the violence-in-video-games debate:
I haven’t played every violent game, but I like Grand Theft Auto III (though a lot of the violence I did was to myself driving) and love Mafia. As I said above, there are two ways to play a game, you can play proactively and strategically or just become a good button-masher. If you want to be strategic—both in terms of the decisions you make and the ways you solve problems—Grand Theft Auto III is subtle and amazing. I found the gang fights distasteful, so I just didn’t trigger them. I went out of my way to see how little damage I could do while still earning my living through crime. Such choices make the game partly mine and not just the designer’s. Games allow you to accept a given assumption (I have to earn a living through crime) and then see how you personally would think, feel, and act.
Plastic had a headline for this item that made me laugh: “No, dear — no homework until you finish this level of Tomb Raider.” There’s a good debate going on there right now about this interview!
Okay, forget about chimps, rhesus monkeys, or the great apes. Clearly humanity has evolved not from primates, but from crows. In a paper in Science last year, a few scientists observed a crow using a piece of wire to retrieve food from a flask. There's a totally mind-bending video of it here, and the description reads as follows:
In the experiments, a captive female crow, confronted with a task that required a curved tool (retrieving a food-containing bucket from a vertical pipe), spontaneously bent a piece of straight wire into a hooked shape — and then repeated the behavior in nine out of ten subsequent trials. Though these crows are known to employ tools in the wild using natural materials, this bird had no prior training with the use of pliant materials such as wire — a fact that makes its apparently spontaneous, highly specific problem-solving all the more interesting, and raises intriguing questions about the evolutionary preconditions for complex cognition. The crow’s behavior was captured on an unusual video clip, available on Science Online.
Christ, it’s a good thing they didn’t leave a Palm Pilot lying around that room. That crow’d probably have venture-capital financing lined up for a company by now.
(Thanks to Boing Boing for this one!)
It’s made of tinfoil and wire, it has no moving parts — yet when you shove 20,000 volts into a “Lifter”, it levitates and hovers like a UFO. Check out the current issue of Wired, where I wrote a feature about the mystery as to how Lifters fly — which took me all the way to NASA down in Alabama, where scientists are experimenting with this weird new technology. The story is online here, but if you buy the issue of Wired itself, it comes with a funky graphical how-to guide on building your own Lifter!
Remember that weird blob of marine flesh that washed up on the shores of Chile a few weeks back? I blogged about it, and noted that oceanographers were totally puzzled as to what it was — and theorized that it might even be a new species.
Nope. Turns out it was the rotting carcass of a sperm whale. Check out the CNN story, with a gross-a-licious description of the decomposition process:
When a sperm whale dies at sea, it rots until it becomes a “skeleton suspended in a semi-liquid mass within a bag of skin and blubber,” the scientists said. Eventually, the skin tears and the bones sink while the skin and blubber float.
“This washes up and has the appearance of an octopus because the spermaceti organ keeps its bulky shape,” they added.
Ew.
(Thanks to Jeff MacIntyre for this one!)
Do any of you Collision Detection readers want to be in a magazine piece?
I've been asked to write an article about "why young guys today don't save as much money as their folks" — or their grandparents, who basically used to save bits of string. Personal savings rates have steadily dropped over the last few decades, while debt levels have skyrocketed, and it's due heavily to the meager savings of folks aged 20 to 40.
So I’m interviewing any guys in that age-range who have slim savings and/or lots of debt, to find out how they got in that situation and what they think of it! (It’s guys only, because this is for a men’s magazine.) If anyone wants to share their experience for this piece, email me! I don’t bite. Heh.
A couple of days ago, I posted about a neat Google hack — the search results for "weapons of mass destruction". In the comment field for the item, Franco pointed out that when he recently tried to search for the goddess "Tykhe", Google asked him if he really meant to search for the word "the". As Franco sardonically joked: "Yes, I meant to search the entire internet for the word 'the' — a word which you refuse to search for." And it's true: Whenever you type in a search string with common words like "the" or "and", Google strips them out. Generally, Google won't even allow you to include "the" as a search term.
But here’s the weird thing: If you type in only the word “the” as a search, you actually do get results. When I searched for “Tykhe”, Google gave me the same response it gave Franco:
Searched the web for Tykhe — Results 1 - 10 of about 302. Search took 0.05 seconds.
Did you mean: The
So I clicked on the “the” search, and discovered it generates 3,680,000,000 results. The top-ranked search results are, in order:
The Onion
The White House
The Economist
NASA
The Guardian
AllTheWeb.com
The Weather Channel
The New York Times
The Washington Post
The Hunger Site
This is really intriguing. Since “the” is the most common word in the English language, it would — theoretically — be distributed pretty evenly around the Internet. In that case, when Google searches for “the”, it faces a unique situation. It would be very hard for Google’s semantic or key-word-matching tools to figure out which web site used the word most frequently, or in a most significant fashion. Most semantic or key-word-matching reasoning is rendered useless. And indeed, look again at the number of results: 3,680,000,000. That’s almost precisely the number of sites that Google claims to index — 3,083,324,652. Thus, the search “the” is returning results for every single page on the Internet.
In this situation, the main trick Google has to fall back on is PageRank: Its patented system for determining which sites are important, by counting the number of links that point to them. This would mean, then, that The Onion — and those other nine sites — may have more links pointing to them than most other sites on the Net. They are, in effect, the most popular sites on the Net, since PageRank popularity is clearly the main criterion — if not the only criterion — that Google is using to place them on the Top 10 list, right?
Well, maybe. Possibly the names of the sites are important, too. Notice that, except for NASA, all the sites have the word “the” in their official web-site title — and thus probably also in their meta tags, and various other semantically important bits of HTML. That may explain why The Hunger Site appears so high.
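If you're curious what PageRank actually does under the hood, it's simple enough to sketch. Here's a toy power-iteration version in Python, run on a made-up four-page web. This is the textbook algorithm only, not Google's production system, and the link graph below is invented for illustration.

```python
# Toy PageRank by power iteration, on a made-up four-page "web".
# The textbook algorithm only -- not Google's production system, and the
# link graph below is invented purely for illustration.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                       # dangling page: spread evenly
                share = damping * rank[page] / len(pages)
                for p in pages:
                    new_rank[p] += share
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

if __name__ == "__main__":
    toy_web = {
        "onion.com":   ["nytimes.com"],
        "nytimes.com": ["onion.com", "weather.com"],
        "weather.com": ["onion.com"],
        "hunger.org":  ["onion.com", "nytimes.com", "weather.com"],
    }
    for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
        print(f"{page:12s} {score:.3f}")
```

Pages that lots of other pages link to bubble up to the top, which is exactly the "popularity contest" logic at work in that Top 10 list.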
Pretty weird, eh?
Another piece I wrote for this weekend’s Boston Globe Ideas section is a short riff off of Edward Tufte’s excellent new pamphlet — The Cognitive Style of PowerPoint. A copy of my piece is online at the Globe site, but here’s a permanent archived copy too:
This is your brain on PowerPoint
by Clive Thompson
The business world loves PowerPoint. Walk into any office meeting today, and you'll see someone deliver a presentation using Microsoft's "slideware" program — complete with spinning icons, animated bar-graphs, and 3-D pie charts projected onto a screen. There are several trillion PowerPoint slides generated each year, and few corporate decisions are made without executives relying on the software.
But what if it’s actually harming our ability to think?
Such is the bold thesis of The Cognitive Style of PowerPoint, a searing little broadside released recently by Edward R. Tufte, the theorist of information design and author of the acclaimed book The Visual Display of Quantitative Information. “The PP slide format has probably the worst signal/noise ratio of any known method of communication on paper or computer screen…,” rails Tufte. “For statistical data, the damage levels approach dementia.”
Yikes. The main problem, Tufte suggests, is PowerPoint’s low resolution, which allows only a tiny amount of information per slide. The average PowerPoint slide fits about 40 words, or 8 seconds worth of silent reading, many times less than the average paper handout. With such haiku-like limits, presenters are forced into intellectually mangling even the simplest of topics. Worse, PowerPoint relentlessly encourages its users to employ bulleted lists, which are “faux-analytical” — they can’t communicate complex relationships between points. (They are also, notes Tufte, a mirror of Microsoft’s whole corporate style: one-line-at-a-time computer code that can’t abide complexity.) Bulleted lists are about selling an idea, rather than explaining it. “What counts are power and pitches, not truth and evidence,” Tufte argues.
While PowerPoint makes it remarkably easy to include data graphics, users typically use them to convey only a few data points. But that, as Tufte points out, negates the whole reason data graphics are useful — to allow comparisons between large amounts of data. The average data graphic in The New York Times, for example, includes 120 elements. The average PowerPoint graphic includes a mere 12 elements, not much more than those you'd find in Soviet propaganda journals. "Doing a bit better than Pravda," he notes drily, "is not good enough."
Except, possibly, in today’s corporate world. Sure, PowerPoint may make it impossible to convey information. But that’s just fine, if — like so many corporate drones we’re forced to work alongside — you haven’t actually got anything to say. Maybe Microsoft understands more than we imagine about our modern Dilbert-style office life. As Tufte glumly concludes: “PowerPoint allows speakers to pretend that they are giving a real talk, and audiences to pretend that they are listening.”
As you may recall, a few months ago I wrote a short blog entry about the “he said, she said” software. A bunch of Israeli artificial-intelligence experts wrote a program that can automatically analyze a piece of anonymous text — and tell you whether it was written by a man or a woman.
The Boston Globe’s superb weekend “Ideas” section asked me to write a story about it, and it appeared in yesterday’s section. It’s online here, and a full copy is below!
He and she: What’s the real difference?
According to a team of computer scientists, we give away our gender in our writing style
by Clive Thompson, 7/6/2003
Imagine, for a second, that no byline were attached to this article. Judging by the words alone, could you figure out if I were a man or a woman?
Moshe Koppel could. This summer, a group of computer scientists — including Koppel, a professor at Israel's Bar-Ilan University — are publishing two papers in which they describe the successful results of a gender-detection experiment. The scholars have developed a computer algorithm that can examine an anonymous text and determine, with accuracy rates of better than 80 percent, whether the author is male or female. It's a computerized twist on an ancient debate: For centuries, linguists and cultural pundits have argued heatedly about whether men and women communicate differently. But Koppel's group is the first to create an actual prediction machine.
A rather controversial one, too. When the group submitted its first paper to the prestigious journal Proceedings of the National Academy of Sciences, the referees rejected it “on ideological grounds,” Koppel maintains. “They said, ‘Hey, what do you mean? You’re trying to make some claim about men and women being different, and we don’t know if that’s true. That’s just the kind of thing that people are saying in order to oppress women!’ And I said ‘Hey — I’m just reporting the numbers.”’
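For the technically curious, here's a bare-bones sketch of what a classifier in that general family looks like: it learns weights over common function words and guesses the author from them. It is emphatically not Koppel's algorithm (his group used a far richer feature set and their own learning method), and the tiny training "corpus" and labels below are placeholders, not their data.

```python
# Bare-bones stand-in for the kind of classifier described above: learn
# weights over common function words and guess an author's gender from them.
# Not Koppel's method; the tiny "corpus" and labels are placeholders only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

FUNCTION_WORDS = ["the", "a", "and", "of", "with", "for", "you", "i",
                  "she", "he", "not", "as", "it", "my", "was"]

train_texts = [                      # placeholder examples only
    "i was with my friend and she said it was not for you",
    "the analysis of the data was conducted as described in the report",
]
train_labels = ["female", "male"]    # placeholder labels only

# token_pattern tweaked so one-letter words like "i" and "a" count as tokens
vectorizer = CountVectorizer(vocabulary=FUNCTION_WORDS,
                             token_pattern=r"(?u)\b\w+\b")
model = LogisticRegression().fit(vectorizer.fit_transform(train_texts),
                                 train_labels)

print(model.predict(vectorizer.transform(["it was not as i said it was"])))
```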
As most geeks know, playing with Lego is superb training in math and geometry. Indeed, many schools now explicitly offer "Lego and math" classes. When you have to calculate the number and type of bricks necessary to make a weird shape, or when you try to create a curve out of square bricks, you quickly run into concepts like fractions, exponents, and squares, cubes and roots. When I first learned about the idea of logarithmic scale, it made immediate sense — because I'd once had the same idea while plotting out brick space on a big flat Lego pad.
Now, Andrew Lipson … this guy’s nuts. Though I mean that in a good way. He essentially performs Xtreme math using Lego bricks: He’s devoted the last few years to developing Lego models of famous geometric shapes, including the Moebius strip (pictured above), the Moebius-like “Klein Bottle”, and the incredibly weird, single-surfaced Bour’s surface. Want to build them yourself? Here’s how Lipson did it:
OK, I admit it — these weren’t constructed entirely without computer assistance. Usually I write some C code to generate whatever the shape is and figure out which cells in a grid made up of 1x1x1 LEGO bricks should be filled in. The code outputs this as an LDraw .DAT file, separated into construction steps adding one complete layer of the structure in each step. Then I use MLCad to view the .DAT file. I play around with the parameters and repeat until I have something that looks nice and which will probably be able to balance.
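The core trick, rasterizing a parametric surface into a grid of 1x1x1 cells, is easy to sketch even without Lipson's C-and-LDraw pipeline. Here's a rough Python version for a Moebius strip; the scale and sampling resolution are arbitrary, and it stops short of writing an actual LDraw file.

```python
# Rough sketch of the voxelization idea Lipson describes: sample a parametric
# surface (here a Moebius strip) and mark which cells of a grid of 1x1x1
# bricks it passes through. His real pipeline is C code emitting LDraw .DAT
# files; this just illustrates the grid-filling step, with arbitrary scale.
import math

def moebius_cells(radius=12.0, half_width=3.0, scale=1.0, samples=400):
    cells = set()
    for i in range(samples):
        u = 2 * math.pi * i / samples          # angle around the strip
        for j in range(-20, 21):
            v = half_width * j / 20.0          # position across the strip
            x = (radius + v * math.cos(u / 2)) * math.cos(u)
            y = (radius + v * math.cos(u / 2)) * math.sin(u)
            z = v * math.sin(u / 2)
            cells.add((int(x // scale), int(y // scale), int(z // scale)))
    return cells

cells = moebius_cells()
print(f"{len(cells)} unit cells to fill with 1x1x1 bricks")
# From here you'd group cells into layers (constant z) to get build steps,
# which is roughly what Lipson's C code writes out as an LDraw file.
```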
Okay, this rocks like a Rush concert: Speakeasy, the incredibly forward-thinking ISP, has set up Netshare — a new system that lets you charge your neighbors small amounts of money to dip into your Wifi signal.
Say you’re living in an apartment building or neighborhood with good radio-wave access to a dozen other households. You could sign up for a $50-a-month DSL account, set up a Wifi node, and share the signal with four other paying neighbors. Speakeasy takes care of the billing, charging everyone their fraction. Presto: You’re all getting broadband for $10 a month, and saving about $500 a year each.
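The arithmetic is easy to check. This quick sketch just assumes a $50-a-month line split evenly five ways; Speakeasy's real pricing and revenue split may differ.

```python
# Quick sanity check on the numbers above. Assumes a $50/month DSL line
# split evenly five ways; Speakeasy's actual pricing and split may differ.
monthly_cost = 50.0
households = 5

per_person = monthly_cost / households             # what each household pays
yearly_savings = (monthly_cost - per_person) * 12  # versus paying solo

print(f"${per_person:.0f}/month each, about ${yearly_savings:.0f} saved per year")
# -> $10/month each, about $480 saved per year ("about $500")
```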
Sure, there'll be the occasional crunch in bandwidth if everyone tries to download 100-meg movie files at the same time. But quite frankly, how likely is that to happen? My girlfriend and I have been sharing a Wifi node for months now, and we both download high-bandwidth stuff — music files, online video, big pieces of software — and I've never once noticed a problem.
This is a brilliant, brilliant, brilliant idea. I hope other ISPs pick up on it, but I’m not holding my breath.
(Thanks to Slashdot for pointing this one out!)
So, this huge mass of incredibly strange gelatinous stuff washes up on the shores of Chile, and oceanologists have no clue what the heck it is. From CNN.com:
The dead creature was mistaken for a beached whale when first reported about a week ago, but experts who went to see it said the 40-foot-long (12-meter) mass of decomposing lumpy grey flesh apparently was an invertebrate.
"We'd never before seen such a strange specimen. We don't know if it might be a giant squid that is missing some of its parts or maybe it's a new species," said Elsa Cabrera, director of the Center for Cetacean Conservation in Santiago.
“Maybe it’s a new species?” I’ve said this before and I’ll say it again: The ocean is just plain weird. We’re busy spending quadrillions of dollars trying to find out if, like, a dozen microbes maybe once lived on Mars sometime in the ancient past. Meanwhile, the entire flippin’ ocean is filled with things that are the size of your front lawn and yet which no-one before has ever even seen.
Last week I posted about SpamArrest — a popular new way to screen out spam. If you send an email to somebody for the first time, SpamArrest asks you to pass a Reverse Turing Test to prove you're really human. It shows you a stretched and distorted graphic image of a word, and asks you to identify it (the picture above is an example). Theoretically, this neatly seals out the spam 'bots, because they can't visually identify graphics — that's a uniquely human ability. Q.E.D.
Unless, of course, you’re blind — and you’re using a text-to-speech converter to surf the web. In that case, this test screws you, as a story in today’s CNET points out:
"It seems that they have jumped on a technological idea without thinking through the consequences for the whole population," said Janina Sajka, director of technology research and development for the American Foundation for the Blind in Washington, D.C. "These systems claim to test whether there's a human on the other end. But it's only technology that can challenge certain human abilities. So someone who doesn't have that particular ability is excluded from participation. That's really inappropriate."
Precisely the point. If you’re blind and using a text-to-speech reader, you are, in essence, using a ‘bot of your own to augment your natural abilities.
And that is, when you think about it, a kind of mind-blowing gloss on the whole idea of a Turing Test. After all, the original Turing Test tries to differentiate between humans and machines. The problem is that it assumes that these are two totally separate species. These days, we're increasingly becoming a cyborg race: Humans whose everyday physical abilities are enhanced or supplanted by machine sight, machine hearing, and machine cognition.
Maybe we need to add a new category to the Turing Test! The new goal should be a test that can figure out whether the person on the other side of the screen is a regular human, a regular computer … or a blend of both.
Heh. A friend just pointed this out to me. Go to Google and type in “weapons of mass destruction”, then click the “I’m feeling lucky” button — which delivers you directly to the top-ranked result for that query.
Remember the infamous dossier that the British government assembled back in the spring — purporting to outline Saddam Hussein's arsenal of weapons of mass destruction? Remember how Colin Powell waved it around during his UN presentation? And then remember how the document was later discovered to have been plagiarized — down to the mistakes in spelling and punctuation — from a grad-student paper?
It seems the detective work behind that scandal was made possible by Microsoft Word. IT geek Richard Smith was the one who uncovered the plagiarism, by analyzing the invisible track marks that Microsoft Word leaves whenever it opens or saves a document. He took the dossier off a British government web site where it was publicly available, and looked at its “revision log”. Smith describes the process on his personal web site:
Most Word document files contain a revision log which is a listing of the last 10 edits of a document, showing the names of the people who worked with the document and the names of the files that the document went under. Revision logs are hidden and cannot be viewed in Microsoft Word. However I wrote a small utility for extracting and displaying revision logs and other hidden information in Word .DOC files.
It is easy to spot the following four names in the revision log of the Blair dossier:
P. Hamill
J. Pratt
A. Blackshaw
M. Khan
In addition, the "cic22" in the first three entries of the revision log stands for "Communications Information Centre," a unit of the British Government.
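Smith's utility parses the actual revision-log structure inside the .DOC binary. If you just want a feel for how much readable metadata Word leaves lying around, a much cruder approach works too: scan the raw bytes for UTF-16 text, which is how Word stores those names and paths. A rough sketch, not his tool:

```python
# Crude illustration of how much readable metadata sits inside a .doc file:
# scan the raw bytes for runs of UTF-16LE text (Word stores author names and
# file paths that way) and print anything that looks like a path or a name.
# Richard Smith's utility parsed the real revision-log structure; this is
# just a blunt string scan that shows the same general idea.
import re
import sys

def hidden_strings(path, min_chars=6):
    data = open(path, "rb").read()
    # printable ASCII characters, each followed by a NUL byte = UTF-16LE text
    pattern = rb"(?:[\x20-\x7e]\x00){%d,}" % min_chars
    for match in re.finditer(pattern, data):
        yield match.group().decode("utf-16-le")

if __name__ == "__main__":
    for text in hidden_strings(sys.argv[1]):   # e.g. python peek.py dossier.doc
        if "\\" in text or ":" in text:        # crude filter for paths and names
            print(text)
```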
Smith passed his findings on to a friend, and they eventually got to a reporter who recognized the names of officials — including the personal assistant to Tony Blair’s press secretary. Blair is probably banning all copies of Microsoft Word from government as we speak.
It is, of course, one of the most remarkable things about Word: It leaves its fingerprints over everything it touches. Word is like a software version of the Heisenberg Uncertainty Principle — because the mere process of opening a document to read it irrevocably changes the document before it even hits the screen. Word is not merely a vehicle for reading and writing letters; it’s an active participant, an author/editor who can’t stop messing with your text. This is partly why Corel’s WordPerfect has a lockhold on the legal sector. When WordPerfect opens a document — even one authored with a very old version of WordPerfect — it doesn’t actually alter any specs on the document. This is incredibly important in forensics, because if you want to use something as legal evidence, you have to prove you haven’t messed with it yourself. If lawyers were to use Word to open crucial documents, they would risk making those documents inadmissible in court — because when Word tries to open a file made with an older version of Word, it immediately begins giving it a nasty little nip-and-tuck in an attempt to “improve” it and bring it up to 8.0 or 9.0 or 67.0 or whatever the heck version Word is currently on.
It reminds me of a game I used to play with my geekier friends: What’s Your Favorite Old Version Of Word? Or rather, what’s the last version you used that didn’t overwhelm you with a million useless features? For me, it’s Word 4.0 — a fine, fine vintage from circa 1995. It had all the stuff I needed — like word count and basic formatting — but none of the studied inanity of today’s “Clippy”-style bloatware, which has transformed Word from being a terrific word processor to being a finely-tuned experiment in irritation worthy of Stanley Milgram.
How about you guys? Which is your favorite version of Word?
(Thanks to Nathan at The Age for pointing me towards this one!)