Common Sense Media creates independent watchdog to test AI safety for kids

Common Sense Media has launched the Youth AI Safety Institute to independently test and evaluate AI products used by young people. The institute aims to hold AI developers accountable for the safety of their platforms and spark a 'race to the top' among AI companies.
The institute, the first standalone entity under Common Sense Media, grew out of years of the organization's research into how kids and families experience AI, and was created to address the potential harm AI could cause to children. It will independently test and evaluate AI products used by young people, share its findings with families in clear, simple terms, and hold AI developers accountable for the safety of their platforms. Common Sense Media has previously published AI risk assessments, documented widespread chatbot use among teenagers, and warned of risks to youth mental health. The organization is working with AI developers including Anthropic and the OpenAI Foundation, and is meeting with CEOs of companies such as Apple, Google, and Microsoft.