Technology

OpenAI did not respect Canadian privacy laws in developing ChatGPT, probe finds


Federal and provincial watchdogs found OpenAI failed to respect Canadian privacy laws when training its ChatGPT chatbot. The probe concluded that OpenAI's collection of information was overly broad, resulting in the compilation and use of sensitive personal details.

OpenAI failed to respect Canadian privacy laws when training its ChatGPT chatbot, federal and provincial watchdogs have found. The conclusion came in a report on a joint investigation by federal privacy commissioner Philippe Dufresne and his counterparts from British Columbia, Alberta, and Quebec.

ChatGPT, released in November 2022, is a popular conversation-style tool that responds to online users' prompts with a wide range of information almost instantly.

The privacy watchdogs found OpenAI's collection of information to train its models was overly broad, resulting in the compilation and use of sensitive personal details, including data about individuals' health conditions and political views, as well as information concerning children. OpenAI did not clearly explain that personal information collected from publicly accessible sources could include data from social media, discussion forums, and other similar websites.

The regulators said OpenAI provided inadequate notifications about potential inaccuracies in ChatGPT responses and did not provide an easily accessible mechanism to access, correct, and delete personal information.

