If it’s NSFW, it’s Not Safe for Your Employees to Share


Security

By | 23/03/2021


Is NSFW visual content proliferating in your environment?

While the internet has grown phenomenally over the years and added significant value to global business, enterprise risk, legal and cyber security teams would be remiss to ignore its darker side.

It plays an enabling role in bringing not safe for work (NSFW) images and inappropriate videos into the work environment.

With unsuitable content sitting just beneath the surface, often hosted on mainstream platforms, and with sharing capabilities now built natively into most online interactions, it is no surprise that it proliferates.

What is NSFW content, and why is it an issue?

It is not just erotic content. In today’s polarised political landscape, everything from sensitive gender and race issues to extremist content is highly accessible through a patchwork of platforms.

Some of these would traditionally have been marginalised but have become increasingly mainstream. Many are hosted on social networks that take a ‘hands-off’ approach to content moderation. Either way, such content is harder to block by domain.

The presence of such grey areas can allow a culture of NSFW content sharing between employees to take root across everything from work email systems to cloud applications.

This is not a hypothetical scenario. In our recent survey of cyber security professionals during lockdown, 21% reported that they had caught employees visiting adult sites at work. A further 21% said they had caught employees trying to bypass web security to look at blacklisted websites.


See the Full Results

The report provides contemporary insight into the challenges security teams face right now, alongside practical advice on creating a safe and flexible environment that empowers people and organisations to meet those challenges.

The risk of NSFW content to companies is clear.  In many countries, employers can be held vicariously liable for the actions of their employees, putting them at risk of litigation.

In the UK, the Obscene Publications Act means that a business can be searched by warrant for obscene materials. In UK employment law, an employer is liable for the acts of its employees, which means that directors and officers of companies can be held to account for illegal content employees are accessing on company-provided devices.

Similarly, if an employee accesses content on a work device that another colleague finds offensive, the company may be liable under the Protection from Harassment Act 1997 in the UK, or equivalent legislation in other geographies, such as the EU's Sexual Harassment Amendment (2002).

As a defence against this, companies must be able to prove that they have taken every possible step in advance to protect their people from a hostile working environment.

The risk isn’t just legal, either. NSFW content of all types comes loaded with the kind of negative perception that can easily trash a brand’s carefully curated image, should it find its way into the public domain. Such reputational issues make good media fodder.

Are you paying for employees to store NSFW content in the cloud?

Alongside these already significant issues, NSFW content also makes poor business sense: employers can end up paying to store questionable employee data in the corporate cloud.

This problem arises when employees use unstructured sync and share applications, which can lead to unsuitable content being uploaded into cloud storage servers.  A recent Veritas report found that 62% of employees use such services.

Even more interestingly, 54% of all data is ‘dark’, meaning it is unclassified and invisible to administrators. Since video consumes the most storage, this can add significant cost to maintaining dubious content.

How do I address the problem of NSFW content?

Companies looking to mitigate the risk from inappropriate image and video sharing should look to address both the human and technological elements.

Education and awareness training is a good way to help employees understand their responsibility to foster a work environment that is respectful of colleagues and legally compliant. Workplace policies around inappropriate sharing should also be laid out clearly in employee handbooks and reinforced where necessary.

From a technical standpoint, image and video content analysis should be added to the main platforms employees use to access and share enterprise information. A good starting point is scanning email content and attachments for inappropriate or harmful image and video content using a contemporary email security solution.
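To make the idea concrete, here is a minimal sketch of attachment scanning, not any vendor's implementation: it walks a message's MIME parts and flags image attachments rejected by a hypothetical classify_image() function, which stands in for whatever detection model or API a given email security solution exposes.

```python
# Minimal sketch: extract image attachments from an email and hand them to a classifier.
# classify_image() is a hypothetical stand-in for an NSFW-detection model or service.
import email
from email import policy
from typing import Callable, List


def scan_message(raw_bytes: bytes, classify_image: Callable[[bytes], bool]) -> List[str]:
    """Return the names of image attachments flagged as inappropriate."""
    msg = email.message_from_bytes(raw_bytes, policy=policy.default)
    flagged = []
    for part in msg.walk():
        if part.get_content_maintype() != "image":
            continue  # this sketch only inspects image parts
        payload = part.get_payload(decode=True)  # decoded attachment bytes
        if payload and classify_image(payload):  # True => deemed NSFW
            flagged.append(part.get_filename() or part.get_content_type())
    return flagged
```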

Image content analysis technology can also be integrated into web filtering, allowing policies to be set up that track and prevent access to adult, offensive and extremist content through the browser.
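In practice, such a policy boils down to mapping detected content categories to actions. The sketch below is purely illustrative; the category names and actions are assumptions, not any specific product's policy schema.

```python
# Illustrative only: a hypothetical mapping of image-analysis categories to
# web-filtering actions, which a proxy or browser control could enforce.
POLICY = {
    "adult":     "block",
    "offensive": "block",
    "extremist": "block",
    "suspect":   "log",    # track access without blocking
}


def action_for(category: str) -> str:
    """Return the filtering action for a detected content category."""
    return POLICY.get(category, "allow")
```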

In the modern cloud-first environment, especially post-COVID, this also means securing the full range of cloud applications by adding image and video analysis via a cloud access security broker (CASB). Doing so allows scanning of all files shared to cloud storage applications such as Dropbox, OneDrive and Google Drive.
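As a rough illustration of the retrospective half of that job, the sketch below sweeps a locally synced cloud-storage folder and collects media files flagged by analyse_file(), a hypothetical hook into an image/video analysis service; a real CASB would instead inspect files inline or via the provider's APIs.

```python
# Rough sketch: sweep a locally synced cloud-storage folder (e.g. a Dropbox or
# OneDrive sync directory) and flag media files for policy review.
# analyse_file() is a hypothetical hook into an image/video analysis service.
from pathlib import Path
from typing import Callable, List

MEDIA_SUFFIXES = {".jpg", ".jpeg", ".png", ".gif", ".mp4", ".mov", ".avi"}


def sweep_sync_folder(root: str, analyse_file: Callable[[Path], bool]) -> List[Path]:
    """Return media files that the analysis hook flags as inappropriate."""
    flagged = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix.lower() in MEDIA_SUFFIXES:
            if analyse_file(path):  # True => flagged for review
                flagged.append(path)
    return flagged
```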

Censornet’s image content analysis tools incorporate deep learning technology to enhance detection capabilities and eradicate false positives. Applied across multiple channels and intelligently updated with machine learning, they allow an Acceptable Use Policy to be delivered at scale, with a full audit trail and without the need for human moderation, which can otherwise become a time-consuming process.

Securing the transmission mechanisms for video and images in this way means NSFW imagery can be detected and controlled before it spreads across internal systems. On a tactical level, this reinforces a positive culture and highlights users abusing company channels. More strategically, it reduces reputational risk and the chance of litigation.

Ultimately, as with anything involving human risk, stopping the user from taking the action in the first place is the best defence. However, people are unpredictable and unreliable in a security context, so to mitigate the issue reliably and at scale, automation is crucial.
