This week, we bring you more of the latest social media news to keep you on your toes and, most importantly, up to date with all things social!
MPs demand social media companies block communications from unverified accounts
Following the racist abuse directed at footballers after the 2020 Euros, a petition demanding compulsory photo ID verification for social media accounts accrued more than 500,000 signatures from concerned members of the public.
The bid to make such verification compulsory was rejected by the petitions committee, which cited concerns that it could unduly target or curtail vulnerable groups’ freedom of expression (often called “the chilling effect”). However, the committee recommended that users be given the option to verify their identity voluntarily and to block all incoming communications from unverified users.
Social media companies would then have to demonstrate to Ofcom that they had taken “proportionate steps” to ensure adults were protected from “legal but harmful” abuse online. What exactly could be considered harmful and abusive, yet somehow legal, remains unclear at the moment.
WhatsApp launches advertising campaign centred on encryption
Meta (way back when it was still called Facebook) first announced plans to integrate all of its messaging apps in 2019. A side effect of this was that every app needed to be brought up to WhatsApp’s level of encryption, which apparently takes about three years and one pandemic.
Government agencies and law enforcement groups across the globe have raised concerns that this might limit their ability to investigate people’s private conversations, though they assure us they only do that to the baddies. The EU, however, has countered that stronger encryption is likely to protect users from blackmail and other kinds of cybercrime.
Conversely, in the years since WhatsApp was acquired by Facebook there has been growing concern over users’ data being shared. WhatsApp co-founder Brian Acton put it bluntly in 2018: “I sold my users’ privacy to a larger benefit”. This new move suggests that social media companies certainly won’t share users’ private data – unless they really, really want to, in which case they might.
The Royal Society issues report on viral virus content
Following a rash of COVID misinformation, The Royal Society (a name which feels like it should have more words in it*) have suggested that the potential benefits of removing content that is “legal but harmful” may not outweigh the problems such removal would cause. The Society stressed that science is a process of dispute and change, and that any perceived censorship is antithetical to the scientific method.
However, The Centre for Countering Digital Hate (that’s plenty of words…) countered this, pointing to a video titled “Plandemic” that went viral in 2020, spreading disinformation about vaccines and masks before eventually being taken down after it was deemed both harmful and difficult to monetise. Its sequel (aptly titled “Plandemic 2”) was restricted much more heavily and failed to make the same mark, a bit like Mean Girls 2.
*they do science stuff by the way
Sources:
https://www.bbc.co.uk/news/technology-60036861
https://www.telegraph.co.uk/news/2022/02/01/block-social-media-trolls-refuse-provide-mps-demand/