Facebook’s stated mission is “to give people the power to build community and bring the world closer together.” But a deeper look at its business model suggests that it is far more profitable to drive us apart. By creating “filter bubbles,” social media algorithms designed to increase engagement and, consequently, create echo chambers where the most inflammatory content achieves the greatest visibility, Facebook profits from the proliferation of extremism, bullying, hate speech, disinformation, conspiracy theory, and rhetorical violence.

Facebook’s problem is not a technology problem, which is why solutions based in technology have failed to stem the tide of problematic content. If Facebook employed a business model focused on efficiently providing accurate information and diverse views, rather than addicting users to highly engaging content within an echo chamber, the algorithmic outcomes would be very different. (A toy sketch of this difference in ranking objectives appears at the end of this post.)

Facebook’s failure to check political extremism, willful disinformation, and conspiracy theory has been well-publicized, especially as these unseemly elements have penetrated mainstream politics and manifested as deadly, real-world violence. So it naturally raised more than a few eyebrows when Facebook’s Chief AI Scientist Yann LeCun tweeted his concern over the role of right-wing personalities in downplaying the severity of the COVID-19 pandemic. Critics were quick to point out that Facebook has profited handsomely from exactly this brand of disinformation. Consistent with Facebook’s recent history on such matters, LeCun was both defiant and unconvincing. In response to a frenzy of hostile tweets, LeCun made the following four claims:

I deleted my Facebook account years ago, for the reasons noted above and a number of far more personal ones. So when LeCun reached out to me, demanding evidence for my claims regarding Facebook’s improprieties, it was via Twitter. What proof did I have that Facebook creates filter bubbles that drive polarization? In anticipation of my response, he offered the claims highlighted above. As evidence, he directed my attention to a single research paper that, on closer inspection, does not appear to reinforce his case at all.

Facebook’s problem is its business model. Fundamentally, it is not a failure of technology, nor a shortcoming in its AI filters. Even when Facebook takes occasion to announce its triumphs in the ethical use of AI, such as its excellent work detecting suicidal tendencies, those advancements pale in comparison to the inherent problems written into its algorithms. LeCun’s comments confirm the concerns that many of us have held for a long time: Facebook has declined to resolve its systemic problems, choosing instead to paper over these deep philosophical flaws with advanced, though insufficient, technological solutions. The entire exchange also suggests that senior leadership at Facebook still suffers from a massive blind spot regarding the harm its platform causes, and that it continues to “move fast and break things” without regard for the global impact of its behavior.
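A postscript for the technically inclined: the difference in objectives is easy to see in miniature. The sketch below is my own illustration, not Facebook’s actual ranking code; the posts, the engagement scores, and the `viewpoint_penalty` knob are all invented. It ranks a toy feed purely by predicted engagement, then re-ranks the same feed with a simple penalty for repeating a viewpoint already shown, which is one crude way to trade raw engagement for diversity.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    engagement: float  # hypothetical model score in [0, 1]
    viewpoint: str     # crude stand-in for an ideological cluster

FEED = [
    Post("Inflammatory conspiracy post", 0.92, "A"),
    Post("Outraged partisan rant",       0.88, "A"),
    Post("Measured policy analysis",     0.41, "B"),
    Post("Local community news",         0.37, "C"),
]

def engagement_rank(posts):
    """Engagement-only objective: the most inflammatory items float to the top."""
    return sorted(posts, key=lambda p: p.engagement, reverse=True)

def diversity_rank(posts, viewpoint_penalty=0.5):
    """Greedy re-rank: discount each candidate by how many posts from the
    same viewpoint have already been shown (one simple diversity term)."""
    ranked, shown = [], {}
    pool = list(posts)
    while pool:
        best = max(pool, key=lambda p: p.engagement
                   - viewpoint_penalty * shown.get(p.viewpoint, 0))
        pool.remove(best)
        ranked.append(best)
        shown[best.viewpoint] = shown.get(best.viewpoint, 0) + 1
    return ranked

print([p.title for p in engagement_rank(FEED)])
# ['Inflammatory conspiracy post', 'Outraged partisan rant', ...]
print([p.title for p in diversity_rank(FEED)])
# ['Inflammatory conspiracy post', 'Measured policy analysis', ...]
```

Both rankers are a few lines long, and the difference in what a user sees comes entirely from the objective, not from the sophistication of the technology. That is the point.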