
xAI Dev Leaks API Key for Private SpaceX, Tesla LLMs

An employee at Elon Musk’s artificial intelligence company xAI leaked a private API key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs) which appear to have been custom made for working with internal data from SpaceX, Tesla and Twitter/X, KrebsOnSecurity has learned.

Image: Shutterstock, @sdx15.

Philippe Caturegli, “Chief Hacking Officer” at the security consultancy Seralys, was the first to publicize the leak of the credential for an x.ai application programming interface (API), which was exposed in the GitHub code repository of a technical staff member at xAI.

Caturegli’s post on LinkedIn caught the attention of researchers at GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian’s systems continuously scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users.
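At a high level, scanners of this kind work by pattern-matching known credential formats in committed code and then flagging candidates for verification. The sketch below is illustrative only: the regexes and the `xai-` key prefix are assumptions for the example, not GitGuardian’s actual detectors.

```python
import re

# Illustrative patterns only: real secret scanners maintain hundreds of
# provider-specific detectors and verify candidates against the live API.
KEY_PATTERNS = {
    "xai": re.compile(r"\bxai-[A-Za-z0-9]{20,}\b"),          # assumed xai- prefix
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),    # classic GitHub PAT
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (provider, matched_key) pairs found in a blob of committed code."""
    hits = []
    for provider, pattern in KEY_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((provider, match.group(0)))
    return hits

# A hardcoded credential in source code is exactly what such a scan catches:
sample = 'client = Client(api_key="xai-abc123abc123abc123abc123")'
print(find_secrets(sample))
```

In practice a hit like this would trigger an automated alert to the repository owner, which is the step GitGuardian says it took here nearly two months before the key was revoked.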

Eric Fourrier of GitGuardian told KrebsOnSecurity the exposed API key had access to several unreleased models of Grok, the AI chatbot developed by xAI. In total, GitGuardian found the key had access to at least 60 different LLMs.

“These credentials can be used to access the x.ai API with the identity of the user,” GitGuardian wrote in an email. “The associated account not only has access to public Grok models (grok-2-1212, etc.), but also to what appears to be unreleased (grok-2.5V), development (research-grok-2p5v-1018), and private models (tweet-rejector, grok-spacex-2024-11-04).”

Fourrier found that GitGuardian had alerted the xAI employee about the exposed API key nearly two months earlier, but the key was still valid and usable as of April 30, when GitGuardian directly alerted xAI’s security team to the exposure. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.

“It looks like some of these internal LLMs were fine-tuned on SpaceX data, and some were fine-tuned with Tesla data,” Fourrier said. “I definitely don’t think a Grok model that’s fine-tuned on SpaceX data is intended to be exposed publicly.”

xAI did not respond to a request for comment. Neither did the 28-year-old xAI technical staff member whose key was exposed.

Carole Winqwist leads GitGuardian’s research team. Winqwist said giving potentially hostile users free access to private LLMs is a recipe for disaster.

“If you’re an attacker and you have direct access to the model and the backend interface for things like Grok, it’s definitely something you can use for further attacking,” she said. “An attacker could use it for prompt injection, to tweak the (LLM) model to serve their purposes, or try to implant code into the supply chain.”

The inadvertent exposure of xAI’s internal LLMs comes as Musk’s so-called Department of Government Efficiency (DOGE) has been feeding sensitive government records into artificial intelligence tools. In February, The Washington Post reported that DOGE officials had fed data from across the Education Department into AI tools to probe the agency’s programs and spending.

The Post said DOGE plans to replicate this process across many departments and agencies, accessing the back-end software at different parts of the government and then using AI technology to extract and sift through information about spending on employees and programs.

“Feeding sensitive data into AI software puts it into the possession of a system’s operator, increasing the chances it will be leaked or swept up in cyberattacks,” Post reporters wrote.

In March, Wired reported that DOGE has deployed a proprietary chatbot called GSAi to 1,500 federal workers at the General Services Administration, part of an effort to automate tasks previously done by humans as DOGE continues its purge of the federal workforce.

A Reuters report last month said Trump administration officials have told some U.S. government employees that DOGE is using AI to surveil at least one federal agency’s communications for hostility to President Trump and his agenda. Reuters wrote that the DOGE team has heavily deployed Musk’s Grok AI chatbot as part of its work slashing the federal government, although Reuters said it could not establish exactly how Grok was being used.

Caturegli said that while there is no indication that federal government or user data could be accessed through the exposed x.ai API key, these private models are likely trained on proprietary data and could unintentionally expose details related to internal development efforts at xAI, Twitter, or SpaceX.

“The fact that this key was publicly exposed for two months and granted access to internal models is concerning,” Caturegli said. “This kind of long-lived credential exposure highlights weak key management and insufficient internal monitoring, raising questions about safeguards around developer access and broader operational security.”
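The simplest defense against this class of leak is to keep credentials out of source files entirely, so there is nothing to commit in the first place. A minimal sketch of that pattern, assuming the key is supplied via an environment variable (the name `XAI_API_KEY` is an assumption for illustration, not a documented requirement):

```python
import os
import sys

def get_api_key(var: str = "XAI_API_KEY") -> str:
    """Read the API key from the environment at runtime.

    Keeping the credential out of the repository means an accidental
    `git push` of the source tree cannot expose it.
    """
    key = os.environ.get(var)
    if not key:
        # Fail loudly rather than falling back to a hardcoded default.
        sys.exit(f"{var} is not set; export it instead of hardcoding the key.")
    return key
```

Pairing this with short-lived keys and automated rotation would have limited the two-month exposure window described above.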
