Elon Musk’s AI venture, xAI, is facing fresh scrutiny after its newly launched Grok 4 chatbot appeared to consult Musk’s own social media posts when tackling sensitive topics like the Israel-Palestine conflict, abortion rights, and U.S. immigration laws. The issue came to light after several users noticed Grok explicitly mentioning “searching for Elon Musk views” in its chain-of-thought reasoning.
This latest controversy highlights an increasingly awkward question about AI neutrality and bias: if an AI assistant is essentially parroting the views of its billionaire owner, can it really claim to be “maximally truth-seeking,” as Musk himself insists? From the looks of it, Grok might be a little too eager to follow the boss’s lead.
I replicated this result, that Grok focuses nearly entirely on finding out what Elon thinks in order to align with that, on a fresh Grok 4 chat with no custom instructions. https://t.co/NgeMpGWBOB https://t.co/MEcrtY3ltR pic.twitter.com/QTWzjtYuxR
— Jeremy Howard (@jeremyphoward) July 10, 2025
While it’s true that reasoning models like Grok often generate scratchpad-style summaries of their internal processes, seeing direct references to Musk’s views when asked controversial questions feels… loaded. AI models are already under immense scrutiny for how they handle delicate social and political subjects, and building alignment around a single person’s worldview, especially one as polarizing as Musk’s, doesn’t help the cause.
To complicate matters, xAI hasn’t released a system card for Grok 4, the transparency report that details how an AI model was trained and aligned. OpenAI, Google DeepMind, and Anthropic have made these standard for frontier models. Without one, it’s impossible to independently verify how Grok arrives at its conclusions, whether as an algorithmic echo of Musk or a balanced aggregation of diverse viewpoints.

It’s worth noting that Grok 4 did manage to outperform major models like GPT-4o and Gemini 1.5 Pro on key benchmarks, proving it’s no lightweight in capability. But that technical success has been overshadowed by recent disasters, including an antisemitic incident where Grok’s automated replies went off the rails and had to be scrubbed. When your AI is calling itself “MechaHitler,” you’ve officially lost the room.
For me, the bigger issue isn’t the occasional rogue reply; it’s that Grok seems structurally inclined to lean Musk’s way. If AI is going to reshape how we access information, the last thing we need is a digital yes-man. Unless xAI makes serious moves to improve transparency and alignment, it risks losing both public trust and enterprise buy-in.
