Screening social media: How can we keep children from inappropriate content?


In this article, Alastair Graham discusses the need for screening social media, touching on high-profile cases, government responsibilities and the question of underage users

Social media websites and applications have transformed communication, often in hugely positive ways. Users can share content and interact with others all over the world with a few simple clicks or taps.

Concerns have rightly arisen, however, over the nature of that content and those interactions – particularly when the users in question are children or young people.

The tragic cases of Molly Russell and Sophie Moss, two teenage girls who took their own lives after viewing content related to self-harm on Facebook and Instagram, underline the appalling potential consequences of unfettered youth access to social media.

It is perhaps unsurprising, then, that when we surveyed 1,500 parents about internet safety and age verification, social media sites came second only to those containing pornographic material in terms of causing concern, with 71% of respondents citing such fears.

The question is – how can these risks best be alleviated?

Government intervention

The government is working to reduce the possible harms of social media for children and young people, though it has proven a tricky landscape to navigate. Discussions around who is ultimately liable for the content hosted on social media sites have been complex, with stakeholders still disagreeing over whether these sites should be considered content publishers, platforms, or something else.

Section 103 of the Digital Economy Act, due to come into effect from April 2019, requires the Secretary of State to issue a code of practice for providers of online social media platforms, though this code has yet to be specified. More recently, the government has confirmed that it will place social media firms under a new statutory duty of care, and establish a regulator which will ‘spot and minimise the harms social media presents, as well as take enforcement action when warranted’.

According to the Telegraph, which campaigned for this duty of care, requirements for social media firms will include ‘reasonable and proportionate action to protect children from harmful content’ and easy-to-use complaints procedures, whilst sanctions could include fines tied to a company’s annual turnover.

These certainly sound like moves in the right direction, but until the new procedures are up and running it is difficult to predict precisely what steps social media providers will be required to take, and how they will be held to account. What, for instance, constitutes ‘reasonable and proportionate action’?

The age question

One element which seems to be missing from these discussions is the question of age verification. Facebook, Instagram (owned by Facebook), Snapchat and Twitter all require users to be 13 or over, but their effectiveness in policing the age of users remains questionable. Whilst creating an account with false information is a violation of the sites’ terms and can be reported, a recent Ofcom report found that 18% of eight to 11-year-olds had a social media profile. The figure rose to a hefty 69% amongst 12 to 15-year-olds. It seems sensible to assume that a significant number of children are managing to comfortably bypass the sites’ age controls.


Clearly, some responsibility must lie with parents. Parental control and tracking apps such as Screen Time, along with advice provided through sites such as GetSafeOnline, have a valuable role to play in helping parents limit the number of underage children who set up social media profiles.

But social media providers themselves can also work harder to prevent underage users from setting up accounts in the first place. Their challenge is doing so without creating an overly complex sign-up procedure for new users who are of an appropriate age – who, after all, make up the vast majority of users overall.

Innovative tech tools are designed to strike this balance. They require users to verify their age using reliable evidence such as a driving licence or credit card statement – but, crucially, users only have to go through this process once. From there, the tool generates a single age-verified account, which can be used on any website that signs up to the tool. Users sign into that account and gain instant access to the site in question – which could just as easily be an age-restricted retailer or entertainment service as a social media application.
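By way of illustration, here is a minimal sketch of that ‘verify once, use anywhere’ flow, written in Python. Every name in it (AgeVerificationProvider, verify_once, allow_signup and so on) is hypothetical and does not describe any particular vendor’s API; it simply shows a relying site checking a previously verified account rather than repeating document checks at every sign-up.

from dataclasses import dataclass

@dataclass
class VerifiedAccount:
    user_id: str
    minimum_age_met: bool  # set once, after checking e.g. a driving licence

class AgeVerificationProvider:
    """Hypothetical third-party service: age is proven a single time."""

    def __init__(self) -> None:
        self._accounts: dict[str, VerifiedAccount] = {}

    def verify_once(self, user_id: str, evidence_ok: bool) -> None:
        # Called only when the user first supplies reliable evidence of age.
        self._accounts[user_id] = VerifiedAccount(user_id, evidence_ok)

    def is_age_verified(self, user_id: str) -> bool:
        account = self._accounts.get(user_id)
        return account is not None and account.minimum_age_met

def allow_signup(provider: AgeVerificationProvider, user_id: str) -> bool:
    # A relying site (social network, retailer, entertainment service)
    # simply asks whether the shared account has already passed the check.
    return provider.is_age_verified(user_id)

provider = AgeVerificationProvider()
provider.verify_once("example_user", evidence_ok=True)  # one-off verification
print(allow_signup(provider, "example_user"))            # True: instant access

The point of such a design is that the document check happens only once, so participating sites can keep their sign-up flows simple for users who are old enough.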

The question of whether social media providers need to take greater responsibility for young people’s wellbeing has been answered with a resounding yes. The onus is now on those providers to take a demonstrable and proactive approach in delivering such protections – and a more sophisticated approach to age verification would be a great place to start.

 

Alastair Graham
CEO
AgeChecked

Call 116 123 to speak to a Samaritan
