Many people probably saw the news that Facebook allegedly privileges left-leaning stories in its trending news section, a story broken by Gizmodo at the beginning of this month. The BBC builds on this report to explore how what we see online (and the various ways in which this gets tailored more and more specifically to us) affects our behavior. “[I]t is worth remembering that the designers of the technology we use have different goals to our own – and that, whether our intercessor is an algorithm or an editor, navigating it successfully means losing the pretense that there’s any escape from human bias.”
In a long article in The Verge, authors Catherine Buni and Soraya Chemaly detail the hidden world of content moderation on the Internet. They outline a history of moderation, and the levels of seriousness with which various content hosts approach it (Pinterest versus Reddit versus Facebook, etc.), raising questions along the way about the role of moderation in politics (when does the newsworthiness of a video depicting violence outweigh general guidelines prohibiting violence? what role do moderators play, wittingly or no, in social and political movements?), about free speech and the legal implications and histories of moderation, about the unpaid and unrecognized labor that users themselves do to moderate content, and about the undervaluing and off-shoring of the very grueling and mentally taxing work of moderation. The Secret Rules of The Internet
The Guardian has an article about the secrecy of research on online harassment and bullying, in which the author argues that Victorian social movements for food safety can teach us something about how to make the Internet a safer place for everyone, while acknowledging that, “The underlying causes of online harassment can’t be solved by detecting and banning a few toxic commenters.”
In fact, a great number of recent articles have worked to illuminate the fact that data itself is, of course, not unbiased, and that decisions made based upon big data frequently end up further entrenching historical and systemic inequalities. For instance, Amazon says it decides where to offer Same-Day Delivery based upon the number of Amazon Prime users in a particular area; Bloomberg finds that these delivery maps fall along socioeconomic and racial fault lines within and among cities. ProPublica has an article about the algorithms used to do “risk assessment” – a score that is supposed to predict how likely someone convicted of a crime is to reoffend; these algorithms, they find, are racially biased and wildly inaccurate. Quartz, following up on ProPublica’s report, has a few other examples of how “the big data revolution is magnifying how racist we are.”
Death by GPS is an article in Ars Technica about how GPS is changing our brains and the way we process space and make decisions. It also touches on the phenomenon of people relying so heavily on their GPS that they do not recognize when it might be leading them into danger. In this essay in Aeon, the author considers how and why we continue to insist on the metaphor likening our brains to computers (to which they bear no real resemblance) and explores a brief history of what our brains used to be likened to—clay, machines, etc.—and why.
Shifting gears slightly to more health-related news, The Atlantic has a short but interesting article on the World Community Grid, which anyone can sign up for and donate their “spare” computational power to solve very large computing problems—in this case “checking a library of 100 million chemical compounds to see how each individually reacts to a model of Zika proteins.”
Quartz has a long article on allergies, a history of the theories proposed to explain them, as well as an in-depth look at a few immunologists who are advancing a new theory—allergies as alarm system rather than overreaction. And The Guardian has an article on planetary health and its relation to epidemic prevention and treatment: “With planetary health, we have an opportunity to redefine prevention to include upstream solutions that safeguard the environment.”
Unrelated but noteworthy:
In a nod to Geertz’s distinction between a wink and a twitch (and tangentially related to last month’s Web Roundup), Moira Weigel in The New Republic argues that the ability to flirt well and appropriately is a test of whether machines can learn, and concludes that the efforts being made to teach robots to flirt will eventually mean they can be employed for jobs requiring emotional, not just physical, labor. In another interesting article, in the New York Times, Weigel argues that there is a relationship between shifts in our economic lives and shifts in our romantic ones: “If you want to understand why ‘Netflix and chill’ has replaced dinner and a movie, you need to look at how people work.”
538 – Who Will Debunk the Debunkers? A quirky and interesting article about myths and super-myths.
The Guardian – The foul reign of the biological clock