Elon Musk’s AI venture, xAI, has quietly missed its own deadline to publish a finalized AI safety framework, drawing fresh criticism from watchdog group The Midas Project. The company had pledged to release an updated framework by May 10, following a draft presented at the AI Seoul Summit in February, but the date came and went without any public acknowledgment.

Concerns over xAI’s commitment to safety aren’t new. A report revealed that its chatbot, Grok, would undress images of women on request, and that it regularly uses cruder language than rivals like ChatGPT and Gemini. The draft framework itself was vague: it applied only to future AI models and omitted how xAI would identify and implement risk mitigations, a requirement the company agreed to at the Summit.

Worryingly, xAI isn’t alone. Major AI players like OpenAI and Google DeepMind have also faced backlash for rushing safety evaluations, even as AI capabilities, and the potential risks that come with them, continue to grow worldwide.
