Grok AI’s Share Feature Exposed Thousands of User Chats on Google



Elon Musk’s artificial intelligence company, xAI, has drawn scrutiny after Google indexed hundreds of thousands of public conversations with its chatbot, Grok. The exposure happened through Grok’s “share” button, which lets users generate a unique link to a transcript of a chat so they can send it to others. But instead of remaining private, the pages behind these shared links were publicly accessible and were indexed by search engines including Google, Bing, and DuckDuckGo.
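Search engines will generally index any publicly reachable URL unless the page explicitly opts out. A common opt-out is the `X-Robots-Tag` response header (or an equivalent `robots` meta tag), which Google, Bing, and DuckDuckGo all honor. The sketch below is hypothetical, not xAI's actual code; the function name and header set are illustrative of how a share endpoint could mark transcript pages as non-indexable:

```python
def share_page_headers() -> dict:
    """Illustrative HTTP response headers for a shared-transcript page.

    The X-Robots-Tag header is the standard crawler opt-out: compliant
    search engines will neither index the page nor follow its links.
    """
    return {
        "Content-Type": "text/html; charset=utf-8",
        # Without a directive like this, a publicly reachable share
        # link is fair game for search-engine crawlers.
        "X-Robots-Tag": "noindex, nofollow",
    }
```

A `Disallow` rule in `robots.txt` can also keep crawlers away from share URLs, though the header approach works per-page even when links circulate publicly.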

Forbes reported that Google alone had indexed more than 370,000 Grok conversations. The transcripts included harmless exchanges like tweet drafts, but also highly sensitive material such as medical questions, personal details, and passwords.


Disturbing content among leaked chats

Beyond everyday prompts, some indexed conversations exposed alarming material. According to Forbes, Grok provided detailed instructions on how to manufacture fentanyl, make explosives, and even offered a plan for the assassination of Elon Musk.

These results surfaced despite xAI’s stated policy banning the promotion of harm, bioweapons, or other illicit activities.

Some users were surprised to learn their shared conversations had been made public. British journalist Andrew Clifford, who had used Grok to draft tweets, told Forbes: “I would be a bit peeved, but there was nothing on there that shouldn’t be there.” He has since switched to using Google’s Gemini AI.

Others, including AI researchers, expressed concern. Nathan Lambert of the Allen Institute for AI said he was unaware that sharing internally with colleagues would also make chats searchable.

“I was surprised that Grok chats shared with my team were getting automatically indexed on Google, despite no warnings of it,” he told Forbes.

Not the first AI chatbot privacy scare

This is not the first time chatbot conversations have leaked beyond user expectations.

OpenAI’s ChatGPT recently faced backlash after some shared ChatGPT conversations were indexed by Google. The company quickly rolled back the feature, calling it “a short-lived experiment” and admitting it posed risks for unintentional data exposure.

Meta’s AI chatbot also came under fire earlier this year when shared chats appeared in a public “discover” feed.

Musk previously mocked OpenAI’s issue, posting “Grok ftw” [for the win] on X when ChatGPT chats appeared in search. Now, his own AI startup faces similar criticism.

The lesson for AI users

AI users should treat every interaction with a chatbot as a potential public post and never share personally identifiable information. Until companies build stronger safeguards and clearer warnings into their AI tools, the safest approach is to assume anything shared with a chatbot could one day end up in a search result.

This isn’t the first time Grok has sparked controversy. Grok’s AI video tool recently faced intense criticism for producing non-consensual deepfake clips of celebrities.



