Jump to Content
Security & Identity

Testing your LLMs differently: Security updates from our latest Cyber Snapshot Report

August 20, 2024
https://storage.googleapis.com/gweb-cloudblog-publish/images/GettyImages-2156385097.max-2600x2600.jpg
Christian Elston

Senior Security Consultant, Mandiant, part of Google Cloud

Jennifer Guzzetta

Product Marketing Manager, Mandiant, part of Google Cloud

Join us at Google Cloud Next

Early bird pricing available now through Feb 14th.

Register

Web-based large-language models (LLM) are revolutionizing how we interact online. Instead of well-defined and structured queries, people can engage with applications and systems in a more natural and conversational manner — and the applications for this technology continue to expand.

While LLMs offer transformative business potential for organizations, their integration can also introduce new vulnerabilities, such as prompt injections and insecure output handling. Although web-based LLM applications can be assessed in much the same manner as traditional web applications, in our latest Cyber Snapshot Report we recommend that security teams update their approach to assessing and adapting existing security methodologies for LLMs.

Beware of prompt injections

An LLMs ability to accept non-structured prompts, in an attempt to “understand” what the user is asking, can expose security weaknesses and lead to exploitation. As general purpose LLMs rise in popularity, so do the number of user instances prompting the LLM to disclose sensitive information like usernames and passwords. Here is an example of this type of sensitive information disclosure from prompt injection:

https://storage.googleapis.com/gweb-cloudblog-publish/images/1_qSLIWtI.max-1500x1500.png

A word on probability 

Traditional web applications are typically deterministic. For any given input, the output of the application is reasonably guaranteed to be consistent (for example, 2 + 2 = 4). On the other hand, web-based LLMs are not deterministic but rather probabilistic. This is inherently due to an LLMs key objective taking shape as the attempt to mimic “understanding” of unstructured inputs. 

Even with the same input, the LLMs responses will begin to differ: The user is not guaranteed the same output every time. Here’s an example:

https://storage.googleapis.com/gweb-cloudblog-publish/images/2_wMtKKHP.max-1000x1000.png

Incorporating probabilistic testing can help provide better evaluation and protection against prompt injection, excessive agency, and overreliance. When it comes to prompt injections in particular, practitioners should identify what prompt and context were provided to the LLM when the vulnerability was discovered. Keeping this probabilistic nature in mind when assessing web-based LLMs will benefit the security professional on both offensive and defensive fronts.

Learn more

To learn more, read the latest issue of our Cyber Snapshot Report. In this report, we also dive into deploying deception strategies that combat adversaries targeting you, considerations for migrating from a legacy to a leading-edge SIEM platform, defending against the growing attacks on cloud-first organizations, and mitigating insider threats with the help of proactive penetration testing. You can read the full report here.  

Posted in