X pledges quicker action on hate and terror content in the UK

X (formerly Twitter) has agreed to review UK reports of suspected illegal hate and terror content within an average of 24 hours, following pressure from Ofcom amid a rise in religiously motivated crime. The platform will also submit performance data quarterly and engage with experts to improve its reporting systems, while restricting access to accounts linked to UK-proscribed terrorist organizations.
Social media platform X has committed to reviewing UK reports of suspected illegal hate and terrorist content within an average of 24 hours, under new agreements with the UK regulator Ofcom. The pledge applies to content flagged through X's illegal content reporting tool, and the company has also promised to assess at least 85% of reports within 48 hours.

Ofcom's online safety director, Oliver Griffiths, described the commitments as a "step forward," particularly in light of recent religiously motivated attacks targeting Jewish communities in the UK, including the Heaton Park Synagogue attack in Manchester in October 2025 and incidents in Golders Green and London. The commitments follow Ofcom's compliance program, launched in December, which evaluates whether major platforms have adequate systems for handling illegal hate and terror material. Griffiths said such content persists on large social media sites, prompting Ofcom to push for stronger action. X will now provide Ofcom with performance data every three months for a year so the regulator can track compliance.

In addition to faster reviews, X has agreed to two further measures. First, it will consult experts to improve its reporting systems, following concerns that some organizations were unclear whether their flags had been received or acted upon. Second, it will restrict UK access to accounts reported for posting illegal terrorist content where those accounts are linked to UK-proscribed terrorist organizations.

Reactions from advocacy groups were mixed. Danny Stone of the Antisemitism Policy Trust called the move a "good start" but criticized X's broader failure to address open racism. Iman Atta of Tell Mama, which tracks anti-Muslim incidents, welcomed the stricter targets, emphasizing that accountability depends on delivery rather than promises alone.

The commitments come as Ofcom continues an investigation into X's AI tool Grok over concerns it was used to generate sexualized images.
This content was automatically generated and/or translated by AI. It may contain inaccuracies. Please refer to the original sources for verification.