mandiant logo

Cyber Snapshot Report

Google Cloud Security

red building

Contents

The Defender's Advantage Cyber Snapshot report delivers practical insights into critical and emerging cybersecurity defense topics. These insights are derived from Mandiant's direct, frontline observations and real-world experiences responding to and analyzing cyberattacks.

coding

The rise of generative AI (artificial intelligence) over the last year propelled multi-purpose large language models (LLMs) into the public eye as well. Organizations are now racing to harness the capabilities of LLMs to advance their customer service, task optimization, content creation, and overall productivity.

With the rapid growth of AI-enabled applications, such as LLMs, comes the heightened risk of associated vulnerabilities. Organizations aim to be first-to-market, but unfortunately this often pushes the prioritization of security to the back burner.

However, like every new application, best practice calls for assessment of its surrounding security measures. While there are similarities to traditional application assessment approaches, there are also nuanced differences that should be considered when designing, building, and assessing AI-enabled applications across the board.


The Intrigue of LLMs

LLMs are large neural networks that are trained on extensive datasets. Depending on the type of dataset the LLM is trained on, it can be customized to produce a variety of tasks including text generation, translation and summarization, and general question and answering.

Most organizations deploy LLMs in a web based fashion, which allows customers and users to interact with the LLM in a familiar format (via a web application) and also gives developers the ability to harness the LLM in a well-defined manner through the use of exposed web application programmable interfaces (APIs). 

LLMs routinely achieve general purpose language generation and mimic the ability to “understand” users and their tailored queries. This allows web-based LLMs to be uniquely suited for enhanced user experience objectives including chat bots, search engines, and virtual assistants.


The Characteristics of LLMs

There are four characteristics to all general purpose web-based LLMs that make them desirable.

First is the prompt: a query provided by the end user via the front end of the application that helps to set the context of the LLM’s interaction with the user and guide the nature of the conversation. The more specific the prompt, the more capable the LLM is to leverage its dataset when answering the user’s question.

Second is exposed APIs: LLMs expose an API interface deliberately for consumption by the end application. These APIs allow traditional web applications to harness the capabilities of the LLM.

Third is the data: The data supplies the model with the information it needs to help answer the user’s prompt. Quality matters, as quantity and specificity enables the LLM to better process the user’s query and develop an answer that is both helpful and accurate to the user.

Fourth is the training set: This is the data the model was initially trained on to make it suitable for its ultimate purpose. Yet again, quality matters.


Top 10 LLM Vulnerabilities

As web applications continue to evolve and grow in ubiquity, unfortunately so too have the number and types of vulnerabilities associated with them. The Open Web Application Security Project (OWASP) tabulates the number and type of web application vulnerabilities reported publicly, and every few years releases a top 10 ranking of the most common vulnerabilities

In a similar manner, a new OWASP top 10 list has been compiled for LLM-specific applications:

LLM01: Prompt Injection

LLM06: Sensitive Information Disclosure

LLM02: Insecure Output Handling

LLM07: Insecure Plugin Design

LLM03: Training Data Poisoning

LLM08: Excessive Agency

LLM04: Model Denial of Service

LLM09: Overreliance

LLM05: Supply Chain Vulnerabilities

LLM10: Model Theft

LLM01: Prompt Injection

LLM06: Sensitive Information Disclosure

LLM02: Insecure Output Handling

LLM07: Insecure Plugin Design

LLM03: Training Data Poisoning

LLM08: Excessive Agency

LLM04: Model Denial of Service

LLM09: Overreliance

LLM05: Supply Chain Vulnerabilities

LLM10: Model Theft

To any seasoned practitioner of application security, many of these vulnerabilities should look familiar. Insecure Output Handling, Model Denial of Service, Supply Chain Vulnerabilities, Sensitive Information Disclosure, Insecure Plugin Design, and Model Theft are not unique to LLMs themselves; instead they are weaknesses due to the architecture of deployment on familiar web-based applications and systems. The positive takeaway from this is that many of the current application methodologies that have been defined over the last 20 years remain applicable, and the foundational principles of those methodologies are still guiding. 


In Conclusion

Web-based LLMs have revolutionized the way users interact with traditional systems; well-defined and structured queries are no longer required. Instead users can engage with applications and systems in a more natural and conversational manner—and the applications for this technology continue to expand as human curiosity and inventiveness evolve.

Web-based LLM applications may be assessed in much the same manner as traditional web applications. Controls like output encoding, authentication and authorization, data protection, and API security are well-defined from many years of application security testing and exploitation.

Moving forward, security professionals should evolve their application security assessment methodology to include probabilistic testing strategies. Incorporating probabilistic testing can assist in better evaluation and protection of web-based LLM applications against prompt injection, excessive agency, and overreliance to maintain an up-to-date and resilient security approach.

From Legacy to Leading-Edge: A Guide to SIEM Migration

woman

In 2024, security information and event management (SIEM) systems are still the backbone of most security operations centers (SOCs). SIEMs have always been used for collecting and analyzing security data from across organizations to help identify, investigate, and respond to threats—but the reality is that today’s SIEMs have little resemblance to those built over 15 years ago. Modern-day SIEMs include cloud-native architecture and capabilities such as user entity and behavior analysis (UEBA); security orchestration, automation, and response (SOAR); attack surface management; and artificial intelligence (AI), to name a few.

Legacy SIEMs are often slow, cumbersome, and difficult to use. Their legacy architecture often prevents them from scaling to ingest high-volume log sources, and they may be unable to keep up with the latest threats or support the latest features and capabilities. They may not offer the flexibility to support your organization's specific requirements or be suited to the multi-cloud strategy that is the reality for most organizations today. Finally, they may be poorly positioned to take advantage of the latest technological developments, like AI.


The Emergence of SIEM Migration

Recently, tectonic shifts in the SIEM space have been introduced that cannot be understated. These developments will accelerate migration from legacy SIEM platforms to modern ones, leaving many organizations with the reality of when they should migrate versus if they should. 

Major moves in the SIEM market across the last year: 

September 2023:

  • Cisco announced its intent to acquire Splunk

May 2024:

  • Crowdstrike introduced a new SIEM offering
  • Exabeam and LogRhythm announced intent to merge
  • IBM sold its Qradar cloud business to Palo Alto Networks and halted new development for Qradar on-prem customers


Google Cloud Security experts have observed and guided hundreds of SIEM migrations over multiple decades, giving us insight into the best practices that can be applied by organizations of all sizes. So, let's take stock of the top SIEM migration tips for 2024.


Selecting a New SIEM

When evaluating a new SIEM solution, start by asking yourself and your security team the following questions to help uncover the strengths and weaknesses of each SIEM option and whether or not it will meet the needs of your organization:

1. Cloud-Native SIEM

Is the SIEM offered by a primary cloud service provider (CSP) who can provide world-scale infrastructure at wholesale prices?

Our experience shows that SIEM providers who operate in clouds that they don’t own have difficulty overcoming the inescapable “margin stacking” that comes with such models. A cloud-native SIEM deployment model allows the SIEM to scale up and down in response to new threats and manage the dynamic nature of an organization’s cloud workloads. Cloud-native SIEMs are also well-positioned to secure cloud workloads by providing low-latency data ingestion from cloud services and ship with detection content to help identify attacks common in the cloud. 

2. SIEM with Intelligence

Does the SIEM vendor have a continuous stream of frontline threat intelligence to drive out-of-the-box detection of new and emerging threats specific to your organization?

Threat intelligence is critical for organizations to effectively detect, triage, investigate, and respond to security incidents. Frontline threat intelligence in particular is valuable because it provides real-time information about the latest threats and vulnerabilities.

To improve real-time threat detection and response capabilities, security organizations are seeking seamless integration of threat intelligence and associated data feeds into their security operations workflows and tooling. 

3. SIEM with Curated Content

Does the SIEM offer an extensive library of supported parsers and detection rules, along with response actions?

Some SIEM vendors rely almost exclusively on their user community or technical alliance partners to create parsers for popular data feeds. While a thriving user community is essential, over-reliance on a user community to provide fundamental capabilities like parsing and detection rule content is a problem. Parsers for common data sources should be created, maintained, and supported directly by the SIEM vendor. Take the same approach when looking at detection rule content. Community rules are essential, but you should expect your vendor to create and maintain a solid library of core detections that are tested, supported, and improved regularly. 

4. SIEM with AI

Does the SIEM incorporate AI? Is it positioned to continue innovating? 

The role of AI in SIEM is still not fully understood (much less implemented) by any vendor. However, leading SIEMs already have tangible AI-driven features shipping today. These features include natural language processing for expressing searches and rules, automated case summarization, and recommended response actions. Most customers and industry observers consider features like threat detection and predictive adversary analysis to be some of the “holy grails” of AI-driven SIEM capabilities. No SIEM reliably offers these features today. As you choose a new SIEM in 2024, consider whether the vendor is investing the resources necessary to make meaningful progress on these transformational capabilities.


In Conclusion

In due time, migration from a legacy SIEM to a modern solution will be unavoidable to keep pace with the evolving threat landscape. While the challenges may seem overwhelming, a well-planned and well-executed migration can lead to significant improvements in your organization’s threat detection, investigation and response capabilities, and overall security posture.

For an in-depth guide to SIEM migration, check out the full paper, The great SIEM migration: A guide to ditching dinosaurs.

If you’re interested in learning about Google’s modern, cloud-native SecOps platform, please visit our web page.

The Rise of Digital Native Attacks

building

The shift toward cloud computing and away from traditional on-premises deployments has continued to grow. In parallel with this increase in system migrations to the cloud, Mandiant has also recently observed an equal increase in the stand up of new digital native organizations.

Digital native refers to the core technological functions or infrastructure used by a company. Mandiant refers to a digital native organization as one that is founded on and operates almost entirely within one or more cloud platforms.These companies typically begin with a small digital footprint and expand toward the use of other cloud and Software-as-a-Service (SaaS) offerings as the needs of their business grows.

The average employee experience at a digital native organization is to log in to the company’s chosen Single Sign On (SSO) provider (for example, Okta, OneLogin, M365, JumpCloud, and the like) and interact with the applications and cloud services required to complete their tasks. Typically, there is no dedicated VPN or identity management platforms (Active Directory) for employees to fall back on.

Despite this shift toward new technological infrastructure, both digital native and cloud computing, Mandiant responders have not observed a corresponding decrease in attacker activity given these unique environments. This may indicate that attackers are adapting their tactics, techniques, and procedures (TTPs) as quickly, if not faster, than companies are able to adapt their cyber defenses. Preparation for the future of the digital native threat landscape begins by breaking the assumption that digital native companies are secure by default. 


Challenges in Digital Native Security

The increase of digital native environments makes it clear that security teams, and specifically red and blue teams, must understand the security challenges associated with these new ecosystems, as they are unique compared to traditional on-premise deployments. The questions become, what are these unique challenges and what skills do cybersecurity professionals require to engage with them? 

Attackers and defenders are no longer in an arms race to secure a few key infrastructure assets, such as a Domain Controller or certificate service—instead, every cloud resource, SSO application, and data connection represents an equal threat to an organization. When digital native architecture is powered by technologies like APIs, IAM, Kubernetes, Docker, serverless functions, microservices, cloud databases, storage, networking, CI/CD pipelines, or other gateways, each represents a potential entry point for attackers. Without thorough security testing, how is an organization to know that the compromise of a third-party SaaS application could give an attacker unrestricted access to their cloud environment? The level of interconnectedness found inside mature digital native environments blurs the lines between third-party applications and organizational “crown jewels.” 

Another challenge unique to security teams in digital native environments are the protocols used to perform attacks. Because digital native environments are principally composed of applications and services designed for access through an employee’s web browser, many attacks occur at layer 7, where interactions with organizational resources are facilitated by common end user protocols, such as HTTP. This forces security professionals to take on the role of a user in the environment and interact with applications and services mostly through a web browser, instead of dedicated command line utilities. This restriction greatly increases the manual effort required to gain adequate situational awareness inside of a new digital native environment. 

However, a byproduct of this restriction is that traditional endpoint detection and response (EDR) solutions are much less effective. When both legitimate and malicious users are interacting with the environment in the same way, it greatly increases the amount of manual review required by defenders in order to distinguish between legitimate threats and benign user requests. In this aspect, securing digital native environments begins to resemble securing a series of individual web applications tied together by SSO and cloud platform trust relationships

Additionally, because each company requires a different set of applications and services to function, information gained about one corporate environment rarely transfers over to the understanding of a different one. This lack of standardization demands a higher level of creativity and ingenuity from security professionals.


A Frontline Case Study

Mandiant uses digital native attacker trends seen on the frontlines to influence the methodology used on red team engagements. One in particular surrounds a large digital native organization with the collaborative goal, set by Mandiant and the customer, to gain access to an internal administrative web application, also identified by the organization as a crown jewel. 

Reconnaissance

This crown jewel application had recently undergone a third-party code audit and used a well-known SSO provider for authentication. Knowing it was less likely that the application itself would be vulnerable to common web attacks, Mandiant red teamers prioritized their testing efforts on other SSO applications in the same environment that may have connections to the crown jewel application. These additional applications included access request management platforms, Kubernetes managers, and API development tools. Mandiant rigorously documented these connections and began assessing the levels of access available for each. 

Privilege Escalation

As the assessment continued, Mandiant found a privilege escalation vector that allowed for administrative access over a SaaS development application used to develop internal automation tools. At the start of the engagement, this SaaS development application was classified as “low risk” given that users inside of the application were still restricted to the roles and permissions assigned to them through the SSO provider. Initially, there was no indication that an administrator of this application would be able to gain additional access or permissions over other SSO applications in the environment. 

Using their administrative access to the SaaS development application, Mandiant found that there were a large number of employees authenticating to the application and running at least one of the tools inside of it every hour. Because of this, the red team engagement pivoted to focus on understanding these particular user workflows and how they could likely be compromised to gain access to other privileged resources within the environment.

Lateral Movement

Additional reconnaissance of the SaaS development application from this administrative perspective revealed that Mandiant had permissions to edit the settings used to connect users to the crown jewel application. Mandiant used these permissions to inject an additional request to the end of each user’s SSO authentication flow that would send plaintext SSO tokens to a Mandiant controlled server. After saving this new configuration, Mandiant red teamers received authentication requests from legitimate users of the crown jewel application. These requests contained credentials that Mandiant was able to leverage to impersonate the requesting user. After letting this injection run overnight, the red teamers were able to capture SSO tokens for a highly privileged administrative user and finally pivot to the crown jewel application.

In Conclusion

Digital native organizations and the cloud-based landscapes in which they exist, are constantly shifting. New technologies are being developed, implemented, and deprecated faster than ever. The only guarantee for these organizations is that cyber criminals will not wait for them to become comfortable with their new environments. Because of this, proactive security measures, including continuous cyber defense controls, ongoing red teaming, and specialized cloud security expertise are essential for proper cyber defense. 

Google Cloud