Top AI companies commit to child safety principles as industry grapples with deepfake scandals

April 24, 2024


After a series of highly publicized scandals involving deepfakes and child sexual abuse material (CSAM) plagued the artificial intelligence industry, top AI companies have pledged to combat the spread of AI-generated CSAM.

Thorn, a nonprofit that creates technology to fight child sexual abuse, announced Tuesday that Meta, Google, Microsoft, CivitAI, Stability AI, Amazon, OpenAI and several other companies have signed onto new standards created by the group in an attempt to address the issue. At least five of the companies have previously responded to reports that their products and services were used to facilitate the creation and spread of sexually explicit deepfakes featuring children.

AI-generated CSAM and deepfakes have become a hot-button issue in Congress and beyond, with reports detailing stories of teenage girls victimized at school with AI-generated sexually explicit images featuring their likenesses.

NBC News previously reported that sexually explicit deepfakes with real children’s faces were among the top search results for terms like “fake nudes” on Microsoft’s Bing, as well as in Google search results for specific female celebrities paired with the word “deepfakes.” NBC News also identified an ad campaign running on Meta platforms in March 2024 for a deepfake app that offered to “undress” a picture of a 16-year-old actress.

The new “Safety by Design” principles the companies signed onto, pledging to integrate them into their technologies and products, include proposals that a number of the companies have already struggled with.
