Security & Identity

Unlocking the mystery of stronger security key management

December 21, 2020
Anton Chuvakin

Security Advisor, Office of the CISO, Google Cloud

Honna Segel

Product Manager

One of the “classic” data security mistakes involving encryption is encrypting the data and failing to secure the encryption key. To make matters worse, a sadly common issue is leaving the key “close” to the data, such as in the same database or on the same system as the encrypted files. Such practices reportedly were a contributing factor in some prominent data breaches. Sometimes, an investigation revealed that encryption was implemented for compliance reasons and without clear threat model thinking; key management was an afterthought or not considered at all.

One could argue that the key must be better protected than the data it encrypts (or, more generally, that the key has to have stronger controls on it than the data it protects). If the key is stored close to the data, the implication is that the controls that secure the key are not, in fact, better.

Regulations do offer guidance on key management, but few give precise advice on where to keep the encryption keys relative to the encrypted data. Keeping the keys “far” from the data is obviously good security practice, but one that is misunderstood by too many organizations. How do you even measure “far” in IT land?

Now, let’s add cloud computing to the equation. One particular line of thinking that emerged in recent years was: “just like you cannot keep the key in the same database, you cannot keep it in the same cloud.”

The expected reaction here is that half of readers will say “Obviously!” while the other half may say “What? That’s crazy!” This is exactly why this is a great topic for analysis!

Now, first, let’s point out the obvious: there is no “the cloud.” And, no, this is not about a popular saying about it being “somebody else’s computer.” Here we are talking about the lack of anything monolithic that is called “the cloud.”

For example, when we encrypt data at rest, there is a range of key management options. By default, we always encrypt the data and store the keys securely and transparently, regardless of specific threat models and requirements. You can read about it in detail in this paper. What you will notice, however, is that keys are always separated from the encrypted data by many boundaries of many different types. For example, in application development, a common best practice is keeping your keys in a separate project from your workloads; that separation introduces additional network, identity, configuration, service, and likely other boundaries as well. The point is that keeping your keys “in the same cloud” does not necessarily mean you are making the same mistake as keeping your keys in the same database, except for a few special cases where it does (these are discussed below).
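To make the separate-project boundary concrete, here is a minimal sketch of granting a workload's service account permission to use a key that lives in a dedicated key project, using the Cloud KMS Python client. All project, key ring, key, and service account names below are illustrative placeholders.

```python
# Minimal sketch: the key lives in a dedicated key project, and the workload's
# service account (in a different project) is granted permission to use it.
# All project, key ring, key, and service account names are placeholders.
from google.cloud import kms

client = kms.KeyManagementServiceClient()

# Key stored in its own project ("keys-project"), not in the workload project.
key_name = client.crypto_key_path(
    "keys-project", "us-east1", "app-keyring", "app-data-key"
)

# Allow the workload to use (but not administer) the key.
policy = client.get_iam_policy(request={"resource": key_name})
policy.bindings.add(
    role="roles/cloudkms.cryptoKeyEncrypterDecrypter",
    members=["serviceAccount:app@workload-project.iam.gserviceaccount.com"],
)
client.set_iam_policy(request={"resource": key_name, "policy": policy})
```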

In addition, cloud introduces a new dimension to the risk of keeping the key “close to” the data: where the key is stored physically versus who controls the key. For example, is the key close to the data if it is located inside a secure hardware device (i.e., an HSM) that sits on the same network, or in the same cloud data center, as the data? Or is the key close to the data if it is located inside a system in another country, but the people with credentials to access the data can also access the key? This also raises the question of who is ultimately responsible if the key is compromised, which complicates the matter even more. All of these are interesting dimensions to explore.

Finally, keep in mind that most of the discussion here focuses on data at rest (and perhaps a bit on data in transit, but not on data in use).

Risks

Now that we understand that the concept of “in the same cloud” is nuanced, let’s look at the risks and requirements that are driving behavior regarding encryption key storage.

Before we start, note that if you have a poorly architected on-premises application that stores the keys in the same database or on the same disk as your encrypted data, and this application is migrated to the cloud, the problem of course migrates to the cloud as well. The solution to this challenge can be to use the cloud-native key management mechanisms (and, yes, that does involve changing the application).

That said, here are some of the relevant risks and issues:

Human error: First, one very visible risk is of course a non-malicious human error leading to key disclosure, loss, theft, etc. Think developer mistakes, use of a poor source of entropy, misconfigured or loose permissions, etc. There is nothing cloud-specific about them, but their impact tends to be more damaging in the public cloud. In theory, cloud provider mistakes leading to potential key disclosure are in this bucket as well.

External attacker: Second, key theft by an external attacker is also a challenge dating back to the pre-cloud era. Top-tier actors have been known to attack key management systems (KMS) to gain wider access to data. They also know how to access and read application logs as well as observe application network traffic—all of which may provide hints as to where keys are located. Instinctively, many security professionals who gained most of their experience before the cloud feel better about a KMS sitting behind layers of firewalls. In practice, external attackers tend to find the above-mentioned human errors and turn those weaknesses into compromises.

Insider threat: Third, and this is where things get interesting: what about the insiders? Cloud computing implies two different insider models: insiders from the cloud user organization and insiders from the cloud provider. While some of the public attention focuses on CSP insiders, it is the customer's insiders who usually hold the valid credentials to access the data. Some CSP employees could, theoretically, and subject to many security controls and the massive collusion required, access the data, but it is the cloud customers' insiders who actually have direct access to their data in the cloud via valid credentials. From a threat modeling perspective, most bad actors will find the weakest link, probably at the cloud user organization, to exploit first before exerting more effort.

Compliance: Fourth, there may be mandates and regulations that prescribe key handling in a particular manner. Many of them predate cloud computing, hence they will not offer explicit guidance for the cloud case. It is useful to differentiate explicit requirements, implied requirements, and what can be called “interpreted” or internal requirements. For example, an organization may have a policy to always keep encryption keys in a particular system, secured in a particular manner. Such internal policies may have been in place for years, and their exact risk-based origin is often hard to trace because it may be decades old. In fact, complex, often legacy, security systems and practices might actually be made simpler (and more comprehensible) with the more modern techniques afforded by cloud computing resources and practices.

Furthermore, some global enterprises may have been subject to some sort of legal matter settled and sealed with a state or government entity separate from any type of regulatory compliance activity. In these cases, the obligations might require some technical safeguards in place that cannot be broadly shared within the organization.

Data sovereignty: Finally, and this is where things rapidly veer outside of the digital domain, there are risks that sit outside of the cybersecurity realm. These may be connected to various issues of data sovereignty and digital sovereignty, and even geopolitical risks. To make this short, it does not matter whether these risks are real or perceived (or whether merely holding the key would ultimately prevent such a disclosure); they do drive requirements for direct control of the encryption keys. For example, it was reported that fear of “blind or third-party subpoenas” has been driving some organizations' data security decisions.

Are the five risks above “real”? Does it matter—if the risks are not real, but an organization plans to act as if they are? And if an organization were to take them seriously, what architectural choices does it have?

Architectures and Approaches

First, a sweeping statement: modern cloud architectures actually make some of the encryption mistakes less likely to be committed. If a particular user role has no access to cloud KMS, there is no way to “accidentally” get the keys (equivalent to finding them on disk in a shared directory, for example). In fact, identity does serve as a strong boundary in the cloud. 

It is notable that trusting, say, a firewall (a network boundary) more than a well-designed authentication system (an identity boundary) is a relic of pre-cloud times. Moreover, cloud access controls and cloud logs that record each time a key is used, how, and by whom may offer better security than most on-premises environments could aspire to.
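As a rough illustration of that visibility, the sketch below pulls recent Cloud KMS entries from Cloud Audit Logs with the Cloud Logging Python client. The project ID is a placeholder, and Data Access audit logs for Cloud KMS need to be enabled for such entries to exist.

```python
# Rough sketch: list recent Cloud Audit Log entries recording Cloud KMS usage
# (which method was called, on which key, by whom, and when). The project ID
# is a placeholder, and Data Access audit logs for Cloud KMS must be enabled.
from google.cloud import logging

client = logging.Client(project="my-project")

log_filter = (
    'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Fdata_access" '
    'AND protoPayload.serviceName="cloudkms.googleapis.com"'
)

for entry in client.list_entries(filter_=log_filter, max_results=20):
    # entry.payload carries the AuditLog record: caller identity, method
    # (e.g., Encrypt/Decrypt), and the key resource name.
    print(entry.timestamp, entry.payload)
```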

Cloud Encryption Keys Stored in Software-Based Systems

For example, if there is a need to apply specific key management practices (internal compliance, risks, location, revocation, etc.), one can use Google Cloud KMS with CMEK. Taking the broad definition, the key is in the same cloud (Google Cloud), but it is definitely not in the same place as the data (see the details of how the keys are stored). People who can get to the data, such as customer insiders with valid credentials for data access, cannot get to the key unless they also have specific permissions on KMS (identity serves as a strong boundary). So, no app developer can accidentally get the keys or design the app with embedded keys.

This addresses most of the above risks, but—quite obviously—does not address some of them. Note that while the cloud customer does not control the safeguards separating the keys from data, they can read up on them.
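As a minimal sketch of what this approach can look like, the snippet below creates a software-protected key in Cloud KMS and sets it as the default CMEK key on a Cloud Storage bucket. All project, key ring, key, and bucket names are placeholders.

```python
# Minimal sketch: create a software-protected Cloud KMS key and use it as the
# customer-managed encryption key (CMEK) for a Cloud Storage bucket.
# Project, location, key ring, key, and bucket names are all placeholders.
from google.cloud import kms, storage

kms_client = kms.KeyManagementServiceClient()
parent = kms_client.key_ring_path("my-project", "us-east1", "my-keyring")

key = kms_client.create_crypto_key(
    request={
        "parent": parent,
        "crypto_key_id": "bucket-cmek-key",
        "crypto_key": {
            "purpose": kms.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT,
            "version_template": {
                "algorithm": kms.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION,
                "protection_level": kms.ProtectionLevel.SOFTWARE,
            },
        },
    }
)

# Point the bucket at the key so new objects are encrypted with it by default.
# (The Cloud Storage service agent also needs Encrypter/Decrypter on the key.)
storage_client = storage.Client(project="my-project")
bucket = storage_client.get_bucket("my-bucket")
bucket.default_kms_key_name = key.name
bucket.patch()
```

If the key is later disabled or destroyed, data encrypted under it becomes unreadable, which is part of what gives the customer control.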

Cloud Encryption Keys Stored in Hardware-Based Systems

Next, if there is a need to make sure a human cannot get to the key, no matter what their account permissions are, Cloud HSM is a way to store keys inside a hardware device. In this case, the boundary that separates keys from data is not just identity, but the security characteristics of a hardware device and all the validated security controls applied to and around the device's location. This addresses nearly all of the above risks, but does not address all of them. It also incurs some costs and possible friction.

Here, too, although the cloud customer can request assurance of the use of a hardware security device and other controls, the customer does not control the safeguards separating the keys from data and still relies on the cloud service provider's handling of the hardware. So, although access to the key material is more restricted with HSM keys than with software keys, access to the use of the keys is not inherently more secure. Also, a key inside an HSM hosted by the provider is seen as being under the logical or physical control of the cloud provider, hence not fitting the letter or spirit of a true Hold Your Own Key (HYOK) requirement.
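In practice, asking Cloud KMS to keep a key inside an HSM is a small change to the key creation request: the protection level. A minimal sketch, with placeholder names:

```python
# Minimal sketch: the same key creation call, but asking Cloud KMS to keep the
# key material inside an HSM by setting the protection level. Names are
# placeholders; the key is otherwise used exactly like a software key.
from google.cloud import kms

client = kms.KeyManagementServiceClient()
parent = client.key_ring_path("my-project", "us-east1", "my-keyring")

hsm_key = client.create_crypto_key(
    request={
        "parent": parent,
        "crypto_key_id": "hsm-backed-key",
        "crypto_key": {
            "purpose": kms.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT,
            "version_template": {
                "algorithm": kms.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION,
                "protection_level": kms.ProtectionLevel.HSM,
            },
        },
    }
)
print(hsm_key.name)
```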

Cloud Encryption Keys Stored Outside Provider Infrastructure

Finally, there is a way to actually address all the risks above, including the last item related to geopolitical issues: practice Hold Your Own Key (HYOK), implemented using technologies such as Google Cloud External Key Manager (EKM). In this scenario, provider bugs, provider mistakes, external attacks on provider networks, and cloud provider insiders don't matter, because the key never appears there. A cloud provider cannot disclose the encryption key to anybody because it does not have the key. This addresses all of the above risks, but incurs some costs and possible friction. Here, the cloud customer controls the safeguards separating the keys from data, and can request assurance of how the EKM technology is implemented.

Naturally, this approach is critically different from any other approach as even customer-managed HSM devices located at the cloud provider data center do not provide the same level of assurance.
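As a rough sketch of what this looks like with Cloud EKM, the snippet below creates a key with EXTERNAL protection that merely points at key material held in an external key manager. The external key URI and all names are placeholders, and the exact flow varies by EKM partner.

```python
# Rough sketch: reference a key that stays in an external key manager (HYOK)
# through Cloud EKM. The key material never enters Google Cloud; Cloud KMS
# only stores a pointer (the external key URI). All names and URIs are placeholders.
from google.cloud import kms

client = kms.KeyManagementServiceClient()
parent = client.key_ring_path("my-project", "us-east1", "my-keyring")

# Create the key "shell" with EXTERNAL protection and no initial version.
ekm_key = client.create_crypto_key(
    request={
        "parent": parent,
        "crypto_key_id": "externally-held-key",
        "crypto_key": {
            "purpose": kms.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT,
            "version_template": {
                "algorithm": kms.CryptoKeyVersion.CryptoKeyVersionAlgorithm.EXTERNAL_SYMMETRIC_ENCRYPTION,
                "protection_level": kms.ProtectionLevel.EXTERNAL,
            },
        },
        "skip_initial_version_creation": True,
    }
)

# Add a version that points at the key held in the external key manager.
client.create_crypto_key_version(
    request={
        "parent": ekm_key.name,
        "crypto_key_version": {
            "external_protection_level_options": {
                "external_key_uri": "https://ekm.example.com/v0/keys/your-external-key"
            }
        },
    }
)
```

If the externally held key is unavailable or access to it is revoked, Google Cloud cannot decrypt the data, which is precisely the control HYOK is meant to provide.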

Key takeaways

  • There is no blanket ban on keeping keys with the same cloud provider as your data or “in the same cloud.” The very concept of “key in the same cloud” is nuanced and needs to be reviewed in light of your regulations and threat models—some risks may be new but some will be wholly mitigated by a move to the cloud. Review your risks, risk tolerances, and motivations that drive your key management decisions.

  • Consider taking an inventory of your keys and note how far or close they are to your data (a small listing sketch follows this list). More generally, are they better protected than the data? Do the protections match the threat model you have in mind? If new potential threats are uncovered, deploy the necessary controls in the environment.

  • Advantages of key management using Google Cloud KMS include comprehensive and consistent IAM, policy, access justification, and logging, as well as likely higher agility for projects that use cloud-native technologies. So, use your cloud provider's KMS for most situations that do not call for externalized trust.

  • Cases where you do need to keep keys off the cloud are clearly specified by regulation or business requirements; a set of common situations for this will be discussed in the next blog. Stay tuned!
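As a small illustration of the inventory suggestion above (and assuming placeholder project, location, and key ring names), a listing like the following shows where each key's material lives:

```python
# Small illustration of the "key inventory" takeaway: list the keys in one key
# ring and note each key's protection level (SOFTWARE, HSM, or EXTERNAL).
# Project, location, and key ring names are placeholders.
from google.cloud import kms

client = kms.KeyManagementServiceClient()
parent = client.key_ring_path("my-project", "us-east1", "my-keyring")

for key in client.list_crypto_keys(request={"parent": parent}):
    level = key.version_template.protection_level  # e.g., SOFTWARE, HSM, EXTERNAL
    print(f"{key.name}: {level.name}")
```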
