Can Facebook Peer Pressure Get People to Vote?
– Erin Amato
‘Digital gerrymandering’ at its finest.
Unbeknownst to American Facebook users, many of them became part of a civic-engineering experiment during the 2010 congressional midterm elections. The result? Otherwise apathetic people decided to vote.
On November 2, 2010, Facebook graphics became voter-friendly for the congressional midterm elections. Political scientists teamed up with Facebook to produce a graphic containing a link for looking up polling places, as well as social incentives to vote: a button announcing that the user had voted, and the profile pictures of up to six Facebook friends who had already done so.
Since then, in what Jonathan Zittrain, professor of law and computer science at Harvard, calls “an awesome feat of data-crunching,” these political scientists have compared the names of the targeted Facebook users with voting records from precincts nationwide.
How much did the voting prompt increase turnout? Users notified of their friends’ voting became 0.39 percent more likely to cast a ballot. “The researchers concluded that their Facebook graphic directly mobilized 60,000 voters, and, thanks to the ripple effect, ultimately caused an additional 340,000 votes to be cast that day,” Zittrain writes. “As they point out, George W. Bush won Florida, and thus the presidency, by 537 votes — fewer than 0.01 percent of the votes cast in that state.”
The implications of social media get-out-the-vote efforts are huge: What if Mark Zuckerberg favors one candidate over another? Facebook dwellers with similar political views could be bombarded with subtle messages urging them to the polls. After all, Facebook “likes” can indicate political affiliation, whether stated explicitly or merely implied. Those with opposing views, meanwhile, might receive no indication via Facebook that it was Election Day.
Social engineering could change the outcome of the next election. “Digital gerrymandering” can steer political appeals away from unsympathetic users, ensuring that only the right eyes see them. The deception lies in the user’s inability to distinguish an advertiser-sponsored link from ordinary content. By personalizing the advertising content their users see, Facebook and other social networking sites can “leverage their awesome platforms to influence policy.”
And it’s all legal. These companies’ disclosure policies clearly reserve the right “to season their newsfeeds and search results however they like.” Furthermore, efforts to sway Facebook users require no disclosure, since they could be construed as falling under those same policies.
Zittrain believes it would be “ill-advised” to pass a law prohibiting digital gerrymandering. Content producers have free speech rights, too: They can tailor results as they see fit. And censoring names from search results or content algorithms is a slippery slope, because doing so limits information for all users.
Instead, Zittrain proposes a better solution: entice web companies storing personal data to act as “information fiduciaries,” similar to how a doctor or lawyer handles sensitive information. “Shouldn’t we treat certain online businesses, because of their importance to people’s lives, and the degree of trust and confidence that people inevitably must place in these businesses, in the same way?” asks Jack Balkin of Yale Law School.
Information safeguards should go beyond following a privacy policy, says Zittrain. He proposes that web companies be required to keep “audit trails reflecting when the personal data of their users is shared with another company, or is used in a new way.” This option would allow users to filter their search data and screen out targeted ads. The information fiduciary would be compelled not to put its own interests before those of its users, much like a financial adviser’s relationship to a client.
Zittrain would support tax breaks and certain legal immunities for web companies that elect to enhance their accountability to users. If companies allowed users to “opt in” to receiving tailored content instead of “opting out” of this commonplace practice, users would be less susceptible to the covert coercion involved in the 2010 get-out-the-vote experiment.
Zittrain notes that this sort of thing has happened before. In 1974, responding to public panic, the Federal Communications Commission strictly forbade airing subliminal messages in TV advertisements. Why shouldn’t the same protection be afforded to users of today’s most popular medium? Zittrain strongly supports the idea of a “happy medium”: one that does not restrict First Amendment free speech protections, yet safeguards web users from covert attempts to sway their political views.
The Source: “Facebook Could Decide an Election Without Anyone Ever Finding Out” by Jonathan Zittrain. The New Republic, June 9, 2014.
Photo courtesy of Flickr/goiabarea