As Twitter rolls out a limited test that lets users record audio tweets and attach them to the original tweet, concerns are being raised about how the company will moderate such tweets, as tackling hateful, abusive or racist audio messages requires more effort than using AI to curb disinformation in normal text tweets.
One good thing is that audio can only be added to original tweets; users can’t include it in replies or retweets with a comment.
This makes it a bit easier to trace the person who posts an offensive audio tweet, so moderators can swing into action to block or flag the tweet or the account.
However, unlike Facebook, which currently has over 15,000 third-party content moderators policing its main app as well as Instagram, Twitter has a small team of human moderators.
In the case of an audio tweet, a moderator has to listen to it to determine whether the voice tweet contains inflammatory or abusive content that needs to be flagged.
Alternatively, AI models could be put on the job of going through audio tweets — but then, how are they supposed to scan voice tweets in various languages?
Even Facebook moderators make blunders. Tasked with reviewing about three million posts a day, Facebook moderators make about 300,000 mistakes in 24 hours in deciding what should stay online and what should be taken down, according to a new report from New York University’s Stern Center for Business and Human Rights.
The number of blunders was derived from a statement Facebook CEO Mark Zuckerberg made in a white paper in November 2018, in which he admitted that moderators “make the wrong call in more than one out of every 10 cases.”
According to a report in Vice, at a time when online platforms are struggling to remove misinformation and fake content, audio tweets may be “a new mechanism to harass people”.
According to media reports, Twitter has far fewer human moderators than other social media giants, so adding such a labor-intensive type of content to moderate could go poorly.
In the case of Facebook, the research found that efficiently sanitising the platform will require far greater resources for content moderation. Going by the present-day scenario, Facebook and other social media platforms need to double the number of people who moderate content on a daily basis and significantly expand fact-checking to debunk misinformation.
While adding inputs on this issue, Deepu Madhavan, Operations Head of New Delhi-based Aqshara Content Solutions, says, “It is important to understand the criticality of the impact that locally generated content has. We have been working on this for the past few years, closely with some of the leading content platforms operating pan-India, apart from content creators and curators of all sizes. In our experience, to get the best experience of local content, a pair of human eyes must pass over it — if possible, eyes with some experience of content safety and know-how of local content taste. There is no doubt that a huge amount of content is generated online; thus the need for a very strong content sanitisation and content moderation strategy exists. AI-based tools have a lot of limitations here. Every platform requires human capital to apply its localised content acumen to ensure content safety.”
Aqshara Content Solutions provides content creation, curation, moderation and management services to some of India’s leading and upcoming platforms. The company was founded by Faiz Askari and Deepu Madhavan, who share diverse experience in content and media.
As a content and media professional, Faiz Askari says, “Social media is at the threshold of gaining authenticity and content authority. The challenge faced by Twitter is very commonly faced by every social media platform. But those platforms that take the route of a human interface for content moderation have far more sanitized content than those with a completely AI-tool-driven backend. In the case of audio tweets, AI can never be a completely robust solution because of the versatility of content.”
The onus is now on Twitter to sort these things out while voice tweets are still in the testing phase, and to create a good mix of AI and human moderation to control what people utter via voice tweets, before users flood the micro-blogging platform with complaints.