Cybercriminals struggling to adopt AI in their work, research suggests

Research analyzing 100 million posts from cybercrime communities suggests that cybercriminals struggle to use AI effectively in their work. The study found that AI coding assistants mainly benefit already-skilled cybercriminals, and that guardrails on major chatbots are reducing harm.
Cybercriminals are struggling to adopt AI in their work, according to research analyzing 100 million posts from underground and dark web cybercrime communities. Researchers from the universities of Edinburgh, Strathclyde, and Cambridge found that most cybercriminals lack the skills or resources to make effective use of AI innovations.

The study used machine learning tools and manual sampling to analyze conversations posted from November 2022 onwards, following the release of ChatGPT. AI was used most successfully for running social media bots and for hiding patterns that cybersecurity defenders could otherwise detect.

The researchers warn that the main risks to industry come from adopting poorly secured AI systems, which cybercriminals can exploit with little effort or skill. The findings will be presented at the Workshop on the Economics of Information Security in Berkeley, US, in June.