Artificial Intelligence

AI data 'poisoning' requires greater attention


Experts warn that 'data poisoning' through generative engine optimization (GEO) can mislead large AI models, and call for regulations and improved data quality to prevent this. China's provisional measures for managing generative AI services are seen as a step in the right direction, but more needs to be done.

China Media Group revealed how generative engine optimization (GEO) can be used to feed false information into large AI models. Algorithmic bias originates in human biases amplified by technology, and GEO-related businesses exploit this vulnerability: because large AI models rely on online searches, their answers can be 'poisoned' by mass-publishing false information.

Preventing this requires a defensive line spanning data sources, model training, and other stages. Model developers should prioritize credible data and implement a certification system. China's 2023 provisional measures for managing generative AI services are a start, but further regulation is needed to address GEO 'poisoning'. AI platforms should establish mechanisms to improve traceability and issue alerts when GEO 'poisoning' is detected.

New businesses are also emerging to tackle data pollution, including data quality certification and credibility assessment of AI-generated content.
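The defensive measures described above could, in one simplified form, look like a credibility filter applied to retrieved web results before they reach a model. The sketch below is purely illustrative: the domain scores, threshold, and `Result` type are hypothetical, not part of any real certification system mentioned in the article.

```python
# Hypothetical sketch: filter retrieved web results by source credibility
# before they enter a model's context, flagging low-credibility sources
# as potential GEO 'poisoning'. Scores and names are illustrative.

from dataclasses import dataclass

@dataclass
class Result:
    url: str
    text: str

# Illustrative credibility scores keyed by domain; a real system would
# draw these from a certification or traceability service.
CREDIBILITY = {
    "gov.example": 0.95,
    "news.example": 0.80,
    "seo-farm.example": 0.10,
}

def domain(url: str) -> str:
    # Extract the host part of a URL ("https://a.example/x" -> "a.example").
    return url.split("//", 1)[-1].split("/", 1)[0]

def filter_results(results, threshold=0.5):
    """Keep results whose source meets the credibility threshold;
    flag the rest for review as possible GEO 'poisoning'."""
    kept, flagged = [], []
    for r in results:
        score = CREDIBILITY.get(domain(r.url), 0.0)  # unknown sources score 0
        (kept if score >= threshold else flagged).append(r)
    return kept, flagged

results = [
    Result("https://gov.example/report", "official statistics"),
    Result("https://seo-farm.example/ai-bait", "mass-published claim"),
]
kept, flagged = filter_results(results)
print([domain(r.url) for r in kept])     # credible sources retained
print([domain(r.url) for r in flagged])  # suspicious sources flagged
```

A production system would need far more than a static allowlist (e.g. provenance signals and content-level checks), but the shape of the mechanism, scoring sources and alerting on low-credibility ones, is the same.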

This content was automatically generated and/or translated by AI. It may contain inaccuracies. Please refer to the original sources for verification.
