One of the big concerns with live-streaming is that it happens in real time, which means offensive or disturbing material can’t be moderated until it’s too late. Several incidents have already underlined the problem. The highest-profile case came when Philando Castile’s fiancée live-streamed the immediate aftermath of his shooting by a police officer, but there have been many others: Antonio Perkins was shot and killed while live-streaming on Facebook, a group of teenagers in Milwaukee live-streamed themselves having sex, and a French woman broadcast her own suicide.
Because these events happen live, and the platforms are open to the public, anyone can see them. Facebook does enforce an age limit for members, but as live-streaming expands, more people will inevitably be exposed to such material, which puts the onus on the networks themselves to put some form of moderation in place to limit that exposure.
To solve this, Facebook says it’s turning to artificial intelligence.
According to Reuters, Facebook’s working on a tool that will automatically flag offensive material in live-streams. Joaquin Candela, Facebook’s director of applied machine learning, says they’ve developed an algorithm that can detect “nudity, violence, or any of the things that are not according to our policies.”
Facebook already uses automation to process the tens of millions of user reports it receives every week, so extending that approach is no surprise. For such a system to be effective with live video, though, it needs to work in real time, which would be a big step up for the technology and could have widespread implications in the future.
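Facebook hasn’t published how its system works, but the basic shape of real-time video moderation can be sketched in a few lines: sample frames from the stream at intervals (rather than scoring every frame, to keep latency low) and flag any frame whose policy scores exceed a threshold. Everything below is a hypothetical illustration — the `Frame` type, the stub `classify` function, and all parameter names are assumptions, and a production classifier would be a trained neural network.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float  # seconds since the stream started
    pixels: bytes     # raw image data in a real system

def classify(frame: Frame) -> dict:
    """Stub for a policy classifier returning per-category scores in [0, 1].

    Purely illustrative: a real system would run a trained model here.
    """
    return {"nudity": 0.01, "violence": 0.02}

def moderate_stream(frames, classifier=classify, threshold=0.9, sample_every=2.0):
    """Sample frames at a fixed interval and flag any that exceed threshold.

    Sampling keeps per-frame work low enough to act while the stream
    is still live, at the cost of possibly missing brief content.
    """
    flagged = []
    next_check = 0.0
    for frame in frames:
        if frame.timestamp < next_check:
            continue  # skip frames between sampling points
        next_check = frame.timestamp + sample_every
        scores = classifier(frame)
        if any(score >= threshold for score in scores.values()):
            flagged.append((frame.timestamp, scores))
    return flagged
```

The trade-off between `sample_every` and the threshold is the crux of the real-time problem: sampling more often catches more, but costs more compute per concurrent stream.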
If it works, it could also help Facebook detect duplicate content and eliminate fake live streams and the unauthorized airing of pay-per-view events, two other significant concerns for live-stream platforms.
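Duplicate detection is a somewhat better-understood problem. One common technique — assumed here for illustration, since Facebook hasn’t disclosed its method — is perceptual hashing: each frame is reduced to a short fingerprint that stays nearly identical under re-encoding or small edits, so a rebroadcast of a pay-per-view event matches known fingerprints even if the pixels aren’t byte-for-byte equal. A minimal "average hash" over a tiny grayscale image:

```python
def average_hash(pixels):
    """Hash a tiny grayscale image (a list of 0-255 ints) to a bit string.

    Each bit records whether a pixel is above the image's mean brightness,
    so re-encoded or slightly altered copies hash to similar bit strings.
    """
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

def hamming(a, b):
    """Count differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

def is_duplicate(pixels_a, pixels_b, max_distance=1):
    """Treat two frames as duplicates if their hashes nearly match."""
    return hamming(average_hash(pixels_a), average_hash(pixels_b)) <= max_distance
```

Real systems hash downscaled frames (e.g. an 8×8 grid) throughout the video and compare against an index of protected content, but the principle is the same: similar images produce similar hashes, exact matching isn’t required.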
AI tools are also being developed to detect harassment and abuse in real time, which, if refined, could have a significant impact.