Artificial Intelligentsia

How the Internet is fitting its users with mental eyeglasses, and letting them see new vistas of knowledge in the process

In one way I know of, the Internet has improved my personality. In the olden days, I would get annoyed (and show it) when I heard a song I knew but couldn’t remember who was singing it, or when I channel-surfed across an old movie and wondered who a familiar-looking character actor was (who is the creepy guy who tussles with Patrick Swayze on the subway in Ghost?). The question would lodge in my brain and make me cranky until the answer popped up hours or days later, or until I forgot about it. Now, if I’m at a computer, I can scratch the mental itch in seconds (answer: the late Vincent Schiavelli), and even while walking around I can query a search engine from my PDA.

In another way I’m all too aware of, the Internet has worsened my disposition, or at least my ability to behave like a grown-up. In principle, time away from a broadband connection should be precious time, because there are fewer distractions. In reality, it makes me nervous, because there’s no new link to click on or blog update to check (for other people, being out of e-mail range for even a minute is anxiety-provoking). Nearly ten years ago, Linda Stone, then an executive with Microsoft, introduced the term “continuous partial attention” to describe this modern predicament—and that was before BlackBerries and WiFi. Now, the sign of a serious meeting is whether participants are forced to turn off their PDAs and laptop WiFi receivers.

No doubt technology is also changing our behavior in ways we ourselves may not be aware of but that are obvious to outsiders. For instance, I can guess from two blocks away that a driver is talking on a cell phone, and I’m rarely wrong. And everyone who checks e-mail on a handheld device thinks it can be done discreetly, but no one who is present when this happens is fooled. Certainly not my wife, who will scream the next time I glance at my BlackBerry while “listening” to her.

What I’m leading up to is a consideration less of these immediate personality changes than of the long-term interaction between human and machine intelligence—from the side of the equation that usually gets less attention. Since the first mammoth devices were assembled, during World War II, people have struggled to make computers “smarter,” and have speculated about how smart they might ultimately become. More than fifty years ago, the British mathematician Alan Turing said that computers would be considered fully intelligent when they met this test: a person would submit statements in natural language—“Who’s going to win the next election?” “My husband seems distant these days”—and wouldn’t be able to tell whether the responses came from another person or a machine. No computer has ever come close to passing this test. Recently, though, the inventor Raymond Kurzweil made a public bet with Mitchell Kapor, the founder of Lotus, that a computer would pass the Turing test by 2029. Kurzweil’s essential argument (derived from his book The Singularity Is Near) was that as computers kept doubling in speed and power, and as programmers continually narrowed the gap between machine “intelligence” and human thought, soon almost anything would be possible. Kapor’s reply was that human beings differed so totally from machines—they were housed in bodies that felt pleasure and pain, they accumulated experience, they felt emotion, much of their knowledge was tacit rather than expressed—that computers would not pass the Turing test by 2029, if ever. (Their back-and-forth exchanges, with views from others, are available at www.kurzweilai.net.)
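Stripped to its mechanics, Turing’s test is a blind exchange of typed messages: a judge converses with two hidden respondents and must guess which one is the machine. A toy sketch of that protocol, in Python, might look like the following; the two respondent functions are hypothetical stand-ins, not real contenders.

```python
import random

def human_reply(prompt):
    # Hypothetical stand-in: a person typing at a hidden terminal.
    return input(prompt + "  [hidden human, type a reply] > ")

def machine_reply(prompt):
    # Hypothetical stand-in: whatever program is trying to pass the test.
    return "Hard to say. Why do you ask?"

def one_round(prompts):
    """One blind round: the judge sees replies labeled only A and B,
    then guesses which label hides the machine."""
    pair = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(pair)                          # hide which is which
    labels = dict(zip("AB", pair))
    for prompt in prompts:
        for label, (kind, respond) in labels.items():
            print(f"{label}: {respond(prompt)}")
    machine_label = next(l for l, (kind, _) in labels.items() if kind == "machine")
    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    return guess != machine_label                 # True if the machine fooled the judge

if __name__ == "__main__":
    fooled = one_round(["Who's going to win the next election?",
                        "My husband seems distant these days."])
    print("The machine passed this round." if fooled else "The judge spotted the machine.")
```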

A recent variant of this argument concerns whether the Internet is already fostering an unanticipated and important form of artificial intelligence. The 2004 book The Wisdom of Crowds, by James Surowiecki of The New Yorker, is the clearest explanation of this development, variously known as “collective intelligence” or “the hive mind.” The logic here is almost identical to that of Adam Smith–style capitalism. Smith argued that millions of buyers and sellers, each pursuing his own interest, would together produce more goods, more efficiently, than any other arrangement could. The Internet has made possible a similarly efficient online marketplace for ideas, reputations, and information. Millions of bloggers create links to other sites and thereby cast marketplace votes for the relevance and plausibility of those sites. Thousands of editors refine each other’s entries in Wikipedia (as described last month in these pages by Marshall Poe). Together, these and other suppliers of collective intelligence can create more knowledge, with less bias and over a wider span of disciplines, than any group of experts could.
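To see why links can function as votes, here is a minimal sketch, in Python, of the kind of iterative tallying that link-based ranking performs. The handful of sites and links is invented, and real systems are vastly more elaborate, but the principle is the same: a site’s score is fed by the scores of the sites that link to it.

```python
# A toy link graph: each site "votes" for the sites it links to.
# The sites and links here are invented for illustration.
links = {
    "blogA": ["wiki", "news"],
    "blogB": ["wiki"],
    "news":  ["wiki", "blogA"],
    "wiki":  ["news"],
}

def rank(links, damping=0.85, iterations=50):
    """Repeatedly redistribute each site's score across its outgoing links;
    sites that accumulate score from well-scored sites rise in the ranking."""
    sites = list(links)
    score = {s: 1.0 / len(sites) for s in sites}
    for _ in range(iterations):
        new = {s: (1 - damping) / len(sites) for s in sites}
        for site, outgoing in links.items():
            share = damping * score[site] / len(outgoing)
            for target in outgoing:
                new[target] += share
        score = new
    return score

for site, value in sorted(rank(links).items(), key=lambda kv: -kv[1]):
    print(f"{site:6s} {value:.3f}")
```

Run repeatedly, the scores settle into a ranking in which a vote from a well-regarded site counts for more than a vote from an obscure one.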

Or so it is claimed by most leaders of Internet companies. The belief in the efficacy and accuracy of collective wisdom is practically as central to the growth of today’s Internet companies as the belief in the efficiency of the market is to the New York Stock Exchange. In different forms it lies behind Google’s “PageRank” system for assessing Web sites, eBay’s measures of a merchant’s trustworthiness, Amazon’s book recommendations, and other hallmarks of the modern Internet industry. “A way to look at these social filtering systems is to think of them as generating millions of ‘most popular’ lists, and finding the most-popular items for people like you,” Greg Linden, who designed Amazon’s recommendations system, told me in an e-mail. “Normal best-seller lists are uninteresting to me because my tastes are not generic. But a best-selling list for people who buy some of the same books I do—that is likely to be interesting.”
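Linden is describing collaborative filtering. A minimal sketch of the idea, assuming a few invented purchase histories, is below: find the customers whose purchases overlap with yours, then rank the books they bought that you have not.

```python
from collections import Counter

# Invented purchase histories, for illustration only.
purchases = {
    "me":    {"The Wisdom of Crowds", "The Singularity Is Near"},
    "alice": {"The Wisdom of Crowds", "Emergence", "Linked"},
    "bob":   {"The Singularity Is Near", "Emergence"},
    "carol": {"A Novel About Cats"},
}

def recommend(target, purchases, top_n=3):
    """Rank books bought by customers who share purchases with the target,
    weighting each neighbor's 'vote' by the size of the overlap."""
    mine = purchases[target]
    votes = Counter()
    for customer, books in purchases.items():
        if customer == target:
            continue
        overlap = len(mine & books)      # how much this customer resembles me
        if overlap == 0:
            continue                     # ignore customers with no shared tastes
        for book in books - mine:        # only books the target doesn't already own
            votes[book] += overlap
    return [book for book, _ in votes.most_common(top_n)]

print(recommend("me", purchases))        # ['Emergence', 'Linked']
```

The result is exactly Linden’s “best-seller list for people who buy some of the same books I do,” shrunk to a dozen lines.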

Recently Jaron Lanier, an essayist on technology, launched a broadside against this faith and set off a major debate within the tech community. At the end of May the online publication Edge published Lanier’s essay “Digital Maoism,” which predicted that collective intelligence would have the same deadening and anticreative effect as political collectivism in general. The heart of this argument was that measures of mass popularity could be accurate in certain limited circumstances, but not in a large variety of others. Edge also published many rebuttals, and the debate goes on. The opposing camps and positions are amazingly similar to those in the endless economic debate between libertarian free-market absolutists, who think that any market outcome must be right, and those who say, “Yes, but …” and start listing cases of market failure.

My sympathies are with Lanier, but here is the intriguing part: even as we debate the limits on how much, and how many kinds of, intelligence human beings can ultimately build into their networks and machines, we have to recognize what computers can do already—and how that eventually may change us.

The most obvious and unquestionable achievement in Internet “intelligence” is the Jeopardy!-style retrieval of “spot knowledge.” If you want to know what other movies Vincent Schiavelli was in before and after Ghost, any search engine will point you to a list. As computing hardware becomes smaller, cheaper, and easier to embed in other products, nearly everything we use will eventually come with the ability to pull relevant data from search engines. The GPS receivers in cars that tell us about the restaurant or rest stop we’re passing are early indicators. Refrigerators will retrieve recipes for the ingredients inside; household appliances will download pages from repair manuals.

There is a second area in which Internet-borne knowledge is becoming steadily more impressive: categorization, or pattern recognition. In general, deciding how different things are alike, and how similar things differ, is extremely difficult for computers. Any three-year-old can instantly tell a cow from a horse; few computers can. But related developments in several search engines have provided the beginnings of useful machine-created categorization.

A search engine called Clusty, founded by Carnegie Mellon computer scientists and based in Pittsburgh, returns its search results grouped by topic category. Type in “theory of evolution,” for instance, and it will tell you which sites discuss Charles Darwin, which cover modern developments in the theory, and which discuss its relationship with the Bible. An experimental search engine developed at the University of Maryland, at tinyurl.com/qkpht, also provides useful categorized results. Ask.com, formerly known as Ask Jeeves, has a very useful “Zoom” feature. Type in “theory of evolution” there, and it will suggest that you might want to narrow the search for information about the mechanics of natural selection, or broaden it to a general query about the beginnings of life. Cnet’s news site has a feature called “Big Picture.” After you enter a query, it produces a concept map showing the related topics to explore. Grokker was a pioneer in producing such concept maps, and remains a useful and reliably interesting way to display the overlaps and divergences among items discovered by a Web search. Raymond Kurzweil’s site employs an idea map of its own. When I’m doing a search to develop a theme and not to check spot knowledge, I have found that these categorizing sites—especially Ask.com—save me time in getting where I want to go.
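What such engines do behind the scenes is, at bottom, grouping by shared vocabulary. The sketch below is a crude, hand-rolled version of that idea in Python, using invented snippets for a “theory of evolution” search; real systems discover far subtler groupings, but the flavor is similar.

```python
import re

# Invented snippets standing in for the results of a "theory of evolution" search.
snippets = [
    "Charles Darwin proposed the theory of natural selection.",
    "Darwin's theory of natural selection explained adaptation.",
    "The modern synthesis joined natural selection with genetics.",
    "Population genetics refined the modern synthesis of evolution.",
    "School boards debate teaching evolution alongside the Bible.",
    "Court cases weigh the Bible against evolution in school curricula.",
]

def words(text):
    """Lowercase word set, ignoring short filler words."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 4}

def cluster(snippets, threshold=0.2):
    """Greedy grouping: file each snippet under the first existing group
    whose accumulated vocabulary it overlaps enough; otherwise start a new group."""
    groups = []  # each entry: (vocabulary set, list of member snippets)
    for snippet in snippets:
        vocab = words(snippet)
        for group_vocab, members in groups:
            overlap = len(vocab & group_vocab) / len(vocab | group_vocab)
            if overlap >= threshold:
                group_vocab |= vocab
                members.append(snippet)
                break
        else:
            groups.append((set(vocab), [snippet]))
    return [members for _, members in groups]

for i, members in enumerate(cluster(snippets), 1):
    print(f"Topic {i}:")
    for snippet in members:
        print("   ", snippet)
```

With these snippets, the three groups that emerge roughly match the Darwin, modern-developments, and Bible categories mentioned above.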

If omnipresent retrieval of spot data means there’s less we have to remember, and if categorization systems do some of the first-stage thinking for us, what will happen to our brains? I’ve chosen to draw an optimistic conclusion, from the analogy of eyeglasses. Before corrective lenses were invented, some 700 years ago, bad eyesight was a profound handicap. In effect it meant being disconnected from the wider world, since it was hard to take in knowledge. With eyeglasses, this aspect of human fitness no longer mattered in most of what people did. More people could compete, contribute, and be fulfilled.

Something similar has been true with most mechanical inventions: sheer muscle power has mattered less, except for sex appeal, and people weak or strong have been able to live fuller lives.

It could be the same with these new computerized aids to cognition. I love winning when I play along with Jeopardy!, so of course I think that spot recall is a crucial part of human smartness. But I know that America’s university system has been the envy of the world in part because it relies so little on rote memorization. When my father was a practicing physician, I often heard him stress the value of clearing his mind of minutiae he could easily look up. Increasingly we all will be able to look up anything, at any time—and, with categorization, get a head start in thinking about connections.

I can think of one group of people for whom abundant, available facts could serve as eyeglasses: the vast majority of us who are destined to have a harder time remembering as the years go on. Imagine if every photograph came with a touch screen describing what it shows and when it was taken, or if reminders about appointments to keep, pills to take, and similar data could be so embedded in our lives that we wouldn’t have to worry about forgetting.

For those without such problems, these new tools may be less immediately essential, but they could still become the modern-day equivalent of the steam engine or the plow: tools that free people from routine chores and give them more time to think, dream, and live. Each previous wave of invention has made humanity more intelligent overall. These may seem like fighting words, but consider: America may have fewer Jeffersons, Adamses, and Franklins now, but it does have many more people who are literate and can aspire to goals beyond mere survival. The next wave of tech innovation, if it is like all the previous ones, will again make us smarter. If we take advantage of its effects, it might even make us wiser, too.

James Fallows is a contributing writer at The Atlantic and author of the newsletter Breaking the News.