Unitary, an EF alumnus, raises £1.3M seed for its content moderation AI


Unitary, a startup that’s developing AI to automate content moderation for “harmful content” so that humans don’t have to, has picked up £1.35 million in funding. The company is still in development mode but launched a trial of its technology in September.

Led by Rocket Internet’s GFC, the seed round also includes backing from Jane VC (the cold email-friendly firm backing female-led startups), SGH Capital, and a number of unnamed angel investors. Unitary had previously raised pre-seed funding from Entrepreneur First, as an alumnus of the company builder program.

“Every minute, over 500 hours of new video footage are uploaded to the internet, and the volume of disturbing, abusive and violent content that is put online is quite astonishing,” Unitary CEO and co-founder Sasha Haco, who previously worked with Stephen Hawking on black holes, tells me. “Currently, the safety of the internet relies on armies of human moderators who have to watch and take down inappropriate material. But humans cannot possibly keep up”.

Not only is the volume of content uploaded ever-increasing, but the people employed to moderate the content on platforms like Facebook can suffer greatly. “Repeated exposure to such disturbing footage is leaving many moderators with PTSD,” says Haco. “Regulations are responding to this crisis and putting increasing pressure on platforms to deal with harmful content and protect our children from the worst of the internet. But currently, there is no adequate solution”.

Which, of course, is where Unitary wants to step in, with a stated mission to “make the internet a safer place” by automatically detecting harmful content. The company says its proprietary AI, which uses “state of the art” computer vision and graph-based techniques, can recognise harmful content at the point of upload, including “interpreting context to tackle even the more nuanced videos,” explains Haco.

Meanwhile, although several solutions already exist that let developers detect more obvious restricted content, such as explicit nudity or extreme violence (AWS, for example, offers one such API), the Unitary CEO argues that none of these is remotely good enough to “truly displace human involvement”.
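
For illustration only (this is not Unitary's technology), here is a minimal sketch of what calling one of those off-the-shelf moderation APIs looks like in practice, using Amazon Rekognition's image moderation endpoint via boto3; the bucket and file names are placeholders.

```python
# A rough sketch of an existing, developer-facing moderation check of the kind
# the article mentions: Amazon Rekognition's DetectModerationLabels API.
# The S3 bucket and object names below are placeholders, not real resources.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "example-uploads", "Name": "frame-0001.jpg"}},
    MinConfidence=80,  # only return labels the model is at least 80% confident about
)

# Each label is a fixed category (e.g. "Explicit Nudity", "Violence") scored
# for this single image or video frame, with no notion of surrounding context.
for label in response["ModerationLabels"]:
    parent = label["ParentName"] or "top-level"
    print(f'{label["Name"]} ({parent}): {label["Confidence"]:.1f}%')
```

APIs like this classify one image or frame at a time against a fixed set of categories, with no view of the wider upload, which is exactly the limitation Haco goes on to describe.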

“These systems fail to understand more subtle behaviours or signs, especially on video,” she says. “While current AI can deal well with short video clips, longer videos still require humans in order to understand them. On top of this, it is often the context of the upload that makes all the difference to its meaning, and it is the ability to incorporate contextual understanding that is both extremely challenging and fundamental to moderation. We are tackling each of these core issues in order to achieve a technology that will, even in the near term, massively cut down on the level of human involvement required and one day achieve a much safer internet”.
