LLM services are being hit by hackers looking to sell stolen credentials

Using cloud-hosted large language models (LLMs) can be quite expensive, which is why hackers have apparently begun stealing, and selling, login credentials for the tools.

Cybersecurity researchers at the Sysdig Threat Research Team recently spotted one such campaign, which they dubbed LLMjacking.

In its report, Sysdig said it observed a threat actor exploiting a vulnerability in the Laravel Framework, tracked as CVE-2021-3129. The flaw allowed the attackers to access the network and scan it for Amazon Web Services (AWS) credentials tied to LLM services.

New methods of abuse

“Once initial access was obtained, they exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to access local LLM models hosted by cloud providers,” the researchers explained in the report. “In this instance, a local Claude (v2/v3) LLM model from Anthropic was targeted.”

The researchers were able to identify the tools the attackers used to generate the requests that invoked the models. Among them was a Python script that checked credentials for ten AI services to determine which of them were usable: AI21 Labs, Anthropic, AWS Bedrock, Azure, ElevenLabs, MakerSuite, Mistral, OpenAI, OpenRouter, and GCP Vertex AI.
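
To illustrate the general technique, here is a minimal Python sketch of that kind of key-checker: it probes each provider's cheapest authenticated endpoint (a model-listing route) and records which keys respond. The provider subset, endpoints, and placeholder keys below are assumptions for illustration, not the attackers' actual script.

```python
# A sketch of a multi-provider key-checker: hit a cheap, read-only endpoint
# for each service and see which keys authenticate. The endpoints here are
# illustrative; a real checker would cover all ten services Sysdig lists.
import requests

PROVIDERS = {
    # provider -> (model-listing endpoint, auth-header builder)
    "openai": ("https://api.openai.com/v1/models",
               lambda key: {"Authorization": f"Bearer {key}"}),
    "mistral": ("https://api.mistral.ai/v1/models",
                lambda key: {"Authorization": f"Bearer {key}"}),
}

def check_key(provider: str, key: str) -> bool:
    """Return True if the key authenticates (HTTP 200 on a listing call)."""
    url, make_headers = PROVIDERS[provider]
    resp = requests.get(url, headers=make_headers(key), timeout=10)
    return resp.status_code == 200

if __name__ == "__main__":
    candidates = {"openai": "sk-...", "mistral": "..."}  # placeholder keys
    for provider, key in candidates.items():
        print(provider, "usable" if check_key(provider, key) else "not usable")
```

A listing call like this costs nothing in tokens, which is exactly what makes it a convenient validity probe.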

They also discovered that the attackers didn't run any legitimate LLM queries during the verification stage, instead doing "just enough" to find out what the credentials were capable of and what quotas applied.
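
One way such a minimal check can work on AWS Bedrock, in the spirit of what Sysdig describes, is to send a deliberately invalid invocation: a validation error proves the caller can reach the model without generating a single billable token, while an access-denied error means the credentials lack permission. The sketch below assumes boto3 and AWS credentials in the environment; the model ID and error handling are illustrative.

```python
# A sketch of a "just enough" access probe against AWS Bedrock.
# max_tokens_to_sample is intentionally invalid (-1), so a permitted caller
# gets ValidationException back -- confirming access at zero token cost.
import json
import boto3
from botocore.exceptions import ClientError

def can_invoke(model_id: str = "anthropic.claude-v2") -> bool:
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    body = json.dumps({"prompt": "\n\nHuman: hi\n\nAssistant:",
                       "max_tokens_to_sample": -1})  # intentionally invalid
    try:
        client.invoke_model(modelId=model_id, body=body)
    except ClientError as err:
        # ValidationException means the request reached the model API,
        # i.e. these credentials can invoke it; anything else means no.
        return err.response["Error"]["Code"] == "ValidationException"
    return True  # the call unexpectedly succeeded, so access clearly works

print(can_invoke())
```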

In its coverage of the findings, The Hacker News says this is evidence that hackers are finding new ways to weaponize LLMs beyond the usual prompt injections and model poisoning: monetizing access to the models while the bill gets mailed to the victim.

That bill, the researchers stressed, could be a big one, running up to $46,000 a day in LLM usage.

“The use of LLM services can be expensive, depending on the model and the amount of tokens being fed to it,” the researchers added. “By maximizing the quota limits, attackers can also block the compromised organization from using models legitimately, disrupting business operations.”
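
To get a rough sense of how a daily figure like that accumulates, the back-of-the-envelope arithmetic below uses assumed prices and quota numbers (broadly in line with published per-token pricing for a Claude 2-class model); none of the constants come from Sysdig's report.

```python
# Back-of-the-envelope daily cost under assumed, illustrative numbers:
# sustained quota-level usage of a Claude 2-class model, billed per token.
PRICE_IN_PER_MTOK = 8.0     # assumed $ per 1M input tokens
PRICE_OUT_PER_MTOK = 24.0   # assumed $ per 1M output tokens
REQS_PER_MIN = 1_000        # assumed requests-per-minute quota
TOKENS_IN = TOKENS_OUT = 1_000  # assumed tokens per request, each direction

daily_reqs = REQS_PER_MIN * 60 * 24
cost_per_req = (TOKENS_IN * PRICE_IN_PER_MTOK
                + TOKENS_OUT * PRICE_OUT_PER_MTOK) / 1_000_000
print(f"${daily_reqs * cost_per_req:,.0f} per day")  # $46,080 at these numbers
```

At these assumed numbers the total lands right around that $46,000 mark, showing how quickly maxed-out quotas translate into real money.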
