You’d be forgiven for thinking that beneath the assiduous fact-finding by leading news brands like the New York Times, The Guardian and Channel 4 in the UK, there’s a slight sense of glee in their reporting on Facebook’s connection to the Cambridge Analytica scandal.
Meanwhile, writers in less-mainstream media have been busy trying to tie together the various news strands that could be connected to the scandal – including the Skripal double-agent poisoning and the sharing of Facebook user data with Kremlin-linked entities, the use of British offshore financial entities by Russian oligarchs and their relationships with the British Conservative Party and the U.S. Republicans, and links to various political campaigns and to intelligence and military contractors around the world.
But politicians, interest groups, industry players and others have always used multiple media and a full armoury of communications techniques to promote their views – including the spread of misinformation – to target specific demographics and influence outcomes.
What’s different is the degree of managed exploitation of personal information at scale – micro-targeting applied to big data – and the lack of regulation of online-only platforms and services as media. This has been dubbed ‘surveillance capitalism’ operating via ‘platform monopolies.’
The new General Data Protection Regulation (GDPR), which becomes enforceable on May 25, 2018, mandates a right to data portability in the EU, along with a limited ‘right to be forgotten’ for users. This will give users choices and control that may help rebalance the asymmetrical relationship between them and providers.
In a recent white paper, the New York University Stern Center for Business and Human Rights suggested that social media companies do more to regulate content. Previously, these platforms leaned towards no self-regulation and no legal liability – unlike newspapers, which choose which news to publish within established legal frameworks.
It cites the fear that aggressive government regulation could provoke an over-reaction by companies and individuals seeking to avoid punishment – resulting in interference with the free expression that is one of the benefits of social media.
The report doesn’t push to make social networks liable for the information users share on their platforms, but suggests it would be reasonable to legislate so that the same laws governing political advertising on TV and radio also apply to social media adverts.
In some cases, social media platforms themselves step up to protect users, as with Twitter’s recent rule overhaul, or to retain advertisers, as with YouTube’s recent changes after big advertisers boycotted the video platform. In other cases, such as Germany’s new hate speech law and a potential similar European Union law, moderation is government-mandated.
Criticism over hate speech, extremism, fake news and other content that violates community standards has prompted the largest social media networks to strengthen policies, add staff, and rework algorithms.
An expansion of Facebook’s review staff joins a handful of other changes social media companies have recently launched: Twitter has booted hate groups, YouTube is adding human review staff and extending its algorithmic detection to more content categories, and Zuckerberg has made curbing abuse on Facebook his goal for 2018.
Finally, the group suggests identifying exactly what the government’s role in the process should be.
It will be interesting to see how the regulatory environment develops …