Companies Just Learned a Brutal Lesson About Training AI to Do Human Jobs

Mercor, a San Francisco-based AI company that hires underemployed experts to train AI models, was hacked, exposing sensitive information from clients including OpenAI, Anthropic, and Meta. The breach was linked to an exploit in an open-source project called LiteLLM, and the stolen data included Slack conversations and videos of interactions between Mercor's AI systems and its workers.

The fallout has been swift. Contractors have filed five lawsuits against Mercor, alleging violations of data privacy and consumer protection laws, and Meta has paused its work with the company while it investigates the incident. The breach raises broader concerns about the security of AI supply chains and highlights the dangers of relying on underpaid, overworked contractors to train valuable AI models.