
Users are still idiots, cough up personal data despite warnings

What does it take to get users to cough up potentially embarrassing personal information?

Study after study has shown that users are the weak link when it comes to security. Some of that, however, is not their fault: best security practices often run counter to everything we know about human behavior and mental capacity. A study to be published in the Journal of Consumer Research adds another entry to that list. It turns out that the warning signs that might tip users off to a web site that's more likely to compromise their personal information actually cause many users to treat that information more casually.

The authors approached the issue with a simple question: what motivates people to reveal personal information on the Internet? Understanding the phenomenon could go a long way toward explaining everything from blogging to why people fall victim to phishing, but the authors chose to focus specifically on whether people would hand over embarrassing personal information, including sexual habits and illegal acts. After several rounds of tests, they conclude, "A central finding of all four experiments, is that disclosure of private information is responsive to environmental cues that bear little connection, or are even inversely related, to objective hazards."

That's a bit of a surprise, given the general assumptions used in the online security field. Most studies have assumed that people are rational actors who carefully balance privacy concerns against a host of competing factors, such as convenience and rewards. But there have been some results, the authors note, that suggest this view is mistaken; for example, they cite research suggesting that people are actually less likely to complete a survey if they're given assurances that their answers will be kept confidential.

To delve into these issues, the researchers first had to demonstrate that they could get random individuals to cough up personal information. To do so, they collaborated with The New York Times to create a web survey entitled "Test your ethics," which asked participants to rate the ethicality of a set of actions. In the process, participants were also asked to indicate whether they had ever engaged in those activities, on the pretext that this might color their ratings.

Answers varied a great deal based on the perceived intrusiveness of the question, but one pattern became clear: more people would admit to having engaged in a given behavior if the question about their own conduct was approached indirectly. When participants were asked about their own behavior as part of the rating process, they were about 1.5 times more likely to admit an ethical misstep than when they were asked point blank in a separate question. This suggests that a casual approach, one that puts participants at ease, is more likely to get them to cough up personal details.

The next few experiments took full advantage of this finding.

The researchers set up two survey web pages, one of which looked very official: it had the Carnegie Mellon University seal, and referred to a "Carnegie Mellon University Executive Council Survey on Ethical Behaviors." The other, well... Comic Sans featured heavily in the site design, and the survey page was entitled "How BAD Are U???" In a pre-test, far more people rated the official-looking page as a safer option for transmitting personal information.

When put to the test, however, the exact opposite occurred. Depending on the question, participants who used the "How BAD Are U???" version admitted to unethical or embarrassing activities at 1.74 to 1.98 times the rate of those given the professional version. In a separate survey, participants rated the same questions as less intrusive when they were presented in Comic Sans, even though there was no difference between the two survey populations in how they rated the social desirability of the activities. In short, an unprofessional-looking interface seemed to loosen participants up in the same manner that approaching a question indirectly did.

Is it at all possible to overcome this? In their final experiment, the researchers did a bit of what they termed "privacy cuing" by asking some of the participants to identify web pages that were phishing attempts before hitting them with the ethics survey. (In a somewhat amusingly designed control, other participants were asked to identify photos of endangered fish species.) Only in the cued population were the rates of disclosure equal between the Carnegie Mellon and Comic Sans groups.

Of course, phishing attempts are unlikely to ever come with a cue to make users think of security, so it's not clear how much good this does us.

The authors recognize several limitations of their study, noting that it was limited to salacious facts that people might be more likely to lie about. They'd clearly like to extend their work to information that's more directly related to security issues, like passwords or PINs.

But the study does show that it's relatively easy to manipulate users into being more or less likely to divulge personal information, and that the cues that sway them can be the exact opposite of those an independent observer would recognize as trustworthy. That is precisely why maintaining high security standards can be so difficult.

Journal of Consumer Research, 2010. DOI: 10.1086/656423
