Beyond photos, Instagram has also added a bullying comment filter to live videos as part of its efforts to keep the platform fun for all users. Instagram's new head, Adam Mosseri, announced in a blog post that the app now uses machine learning to detect harassment in photos and captions, immediately sending those posts for review.
If a human moderator finds that a photo breaches the platform's community guidelines, the photo will be removed, and the poster will be notified of the deletion and told why.
One of Instagram's new tools is bullying detection in captions and photos.
An Instagram spokesperson said its bullying classifier detects "attacks on a person's appearance or character, as well as threats to a person's well-being or health" in a photo.
For example, the technology can identify bullying tactics such as comparing, ranking, and rating people in images and captions, like a split-screen image in which a person is negatively compared to someone else.
The company, which is owned by Facebook, is also launching a "Kindness Camera Effect" in partnership with dancer Maddie Ziegler. Mosseri said the bullying-detection feature, which is rolling out now and will continue to do so over the coming weeks, is important because many bullying victims and observers never report the abuse. If you switch to the rear camera, you see the word "kindness" in different languages, and you can tag friends, too. Instagram stressed that the new tools will help the company ensure the safety of teen users, who it believes experience online bullying most often.
This feature works in the same vein as the anti-bullying comment filter rolled out earlier this year for Feed, Explore and Profile.
Why it matters: Bullying isn't exclusive to Instagram; it's prevalent on virtually every social network and messaging app. Users can also turn off comments on individual Instagram posts and block posts containing certain keywords.
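To make the keyword-blocking idea concrete: Instagram's actual system relies on a machine-learning classifier whose internals are not public, but the user-facing keyword filter can be illustrated with a simple sketch. Everything below, including the blocklist and function name, is hypothetical and for illustration only.

```python
# Toy sketch of a keyword-based comment filter, similar in spirit to the
# user-configurable keyword blocking Instagram offers. This is NOT how
# Instagram's ML classifier works; it is a minimal illustrative example.

BLOCKED_KEYWORDS = {"ugly", "loser"}  # hypothetical user-chosen blocklist

def should_hide(comment: str) -> bool:
    """Return True if the comment contains any blocked keyword."""
    words = comment.lower().split()
    # Strip trailing punctuation so "loser!" still matches "loser".
    return any(word.strip(".,!?") in BLOCKED_KEYWORDS for word in words)

print(should_hide("You are such a loser!"))   # True
print(should_hide("Great photo, congrats!"))  # False
```

A real system would go far beyond exact word matching (misspellings, context, images), which is exactly why Instagram is moving from simple filters to machine-learning detection.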