Remediating Security Health Analytics findings

This page provides a list of reference guides and techniques for remediating Security Health Analytics findings using Security Command Center.

You need adequate Identity and Access Management (IAM) roles to view or edit findings, and to access or modify Google Cloud resources. If you encounter permissions errors when accessing Security Command Center in the Google Cloud console, ask your administrator for assistance and, to learn about roles, see Access control. To resolve resource errors, read documentation for affected products.

Security Health Analytics remediation

This section includes remediation instructions for all Security Health Analytics findings.

Deactivation of findings after remediation

After you remediate a vulnerability or misconfiguration finding, Security Health Analytics automatically sets the state of the finding to INACTIVE the next time it scans for the finding. How long Security Health Analytics takes to set a remediated finding to INACTIVE depends on when the finding is fixed and the schedule of the scan that detects the finding.

Security Health Analytics also sets the state of a finding to INACTIVE when a scan detects that the resource that is affected by the finding is deleted. If you want to remove a finding for a deleted resource from your display while you are waiting for Security Health Analytics to detect that the resource is deleted, you can mute the finding. To mute a finding, see Mute findings in Security Command Center.
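
If you prefer to mute such findings programmatically, the following minimal Python sketch uses the Security Command Center client library's set_mute method; the finding resource name is a placeholder that you would replace with the full name of the finding on the deleted resource.

    # Minimal sketch: mute a single finding with the Security Command Center
    # Python client library. The finding name below is a placeholder.
    from google.cloud import securitycenter

    client = securitycenter.SecurityCenterClient()

    # Full resource name of the finding, for example:
    # organizations/ORG_ID/sources/SOURCE_ID/findings/FINDING_ID
    finding_name = "organizations/123/sources/456/findings/abc123"

    finding = client.set_mute(
        request={
            "name": finding_name,
            "mute": securitycenter.Finding.Mute.MUTED,
        }
    )
    print(f"Finding {finding.name} mute state: {finding.mute.name}")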

Do not use mute to hide remediated findings for existing resources. If the issue recurs and Security Health Analytics restores the ACTIVE state of the finding, you might not see the reactivated finding, because muted findings are excluded from any finding query that specifies NOT mute="MUTED", such as the default finding query.

For information about scan intervals, see Security Health Analytics scan types.

Access Transparency disabled

Category name in the API: ACCESS_TRANSPARENCY_DISABLED

Access Transparency logs when Google Cloud employees access the projects in your organization to provide support. Enable Access Transparency to log who from Google Cloud is accessing your information, when, and why. For more information, see Access Transparency.

To enable Access Transparency on a project, the project must be associated with a billing account.

Required roles

To get the permissions that you need to perform this task, ask your administrator to grant you the Access Transparency Admin (roles/axt.admin) IAM role at the organization level. For more information about granting roles, see Manage access.

This predefined role contains the permissions axt.labels.get and axt.labels.set, which are required to perform this task. You might also be able to get these permissions with a custom role or other predefined roles.

Remediation steps

To remediate this finding, complete the following steps:

  1. Check your organization-level permissions:

    1. Go to the Identity and Access Management page in the Google Cloud console.

      Go to Identity and Access Management

    2. If you're prompted, select the Google Cloud organization in the selector menu.

  2. Select any Google Cloud project within the organization using the selector menu.

    Although Access Transparency is configured from a Google Cloud project page, it is enabled for the entire organization.

  3. Go to the IAM & Admin > Settings page.

  4. Click Enable Access Transparency.

Learn about this finding type's supported assets and scan settings.

AlloyDB auto backup disabled

Category name in the API: ALLOYDB_AUTO_BACKUP_DISABLED

An AlloyDB for PostgreSQL cluster doesn't have automatic backups enabled.

To help prevent data loss, turn on automated backups for your cluster. For more information, see Configure additional automated backups.

To remediate this finding, complete the following steps:

  1. Go to the AlloyDB for PostgreSQL clusters page in the Google Cloud console.

    Go to AlloyDB for PostgreSQL clusters

  2. Click a cluster in the Resource Name column.

  3. Click Data protection.

  4. Under the Automated backup policy section, click Edit in the Automated backups row.

  5. Select the Automate backups checkbox.

  6. Click Update.
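
If you manage AlloyDB through the Admin API instead of the console, the following minimal Python sketch shows one way to enable automated backups, assuming the google-cloud-alloydb client library; the project, region, and cluster names are placeholders.

    # Minimal sketch: enable automated backups on an AlloyDB cluster with the
    # AlloyDB Admin API Python client. Resource names are placeholders.
    from google.cloud import alloydb_v1
    from google.protobuf import field_mask_pb2

    client = alloydb_v1.AlloyDBAdminClient()

    cluster = alloydb_v1.Cluster(
        name="projects/my-project/locations/us-central1/clusters/my-cluster",
        automated_backup_policy=alloydb_v1.AutomatedBackupPolicy(enabled=True),
    )

    operation = client.update_cluster(
        request=alloydb_v1.UpdateClusterRequest(
            cluster=cluster,
            # Replaces the whole automated backup policy; include any schedule
            # settings you want to keep.
            update_mask=field_mask_pb2.FieldMask(paths=["automated_backup_policy"]),
        )
    )
    operation.result()  # Blocks until the update completes.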

Learn about this finding type's supported assets and scan settings.

AlloyDB log min error statement severity

Category name in the API: ALLOYDB_LOG_MIN_ERROR_STATEMENT_SEVERITY

An AlloyDB for PostgreSQL instance does not have the log_min_error_statement database flag set to error or another recommended value.

The log_min_error_statement flag controls whether SQL statements that cause error conditions are recorded in server logs. SQL statements of the specified severity or higher are logged. The higher the severity, the fewer messages are recorded. If set to a severity level that is too high, error messages might not be logged.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the AlloyDB for PostgreSQL clusters page in the Google Cloud console.

    Go to AlloyDB for PostgreSQL clusters

  2. Click the cluster in the Resource Name column.

  3. Under the Instances in your cluster section, click Edit for the instance.

  4. Click Advanced Configuration Options.

  5. Under the Flags section, set the log_min_error_statement database flag with one of the following recommended values, according to your organization's logging policy.

    • debug5
    • debug4
    • debug3
    • debug2
    • debug1
    • info
    • notice
    • warning
    • error
  6. Click Update Instance.
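
As an alternative to the console, the following minimal Python sketch sets the flag through the AlloyDB Admin API, assuming the google-cloud-alloydb client library; the resource names and the chosen value are placeholders, and the same pattern applies to the log_min_messages and log_error_verbosity findings that follow.

    # Minimal sketch: set the log_min_error_statement database flag on an
    # AlloyDB instance with the AlloyDB Admin API Python client.
    from google.cloud import alloydb_v1
    from google.protobuf import field_mask_pb2

    client = alloydb_v1.AlloyDBAdminClient()

    instance = alloydb_v1.Instance(
        name=(
            "projects/my-project/locations/us-central1/"
            "clusters/my-cluster/instances/my-instance"
        ),
        # This map replaces the instance's full set of flags, so include any
        # existing flags you want to keep.
        database_flags={"log_min_error_statement": "error"},
    )

    operation = client.update_instance(
        request=alloydb_v1.UpdateInstanceRequest(
            instance=instance,
            update_mask=field_mask_pb2.FieldMask(paths=["database_flags"]),
        )
    )
    operation.result()  # Wait for the flag change to be applied.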

Learn about this finding type's supported assets and scan settings.

AlloyDB log min messages

Category name in the API: ALLOYDB_LOG_MIN_MESSAGES

An AlloyDB for PostgreSQL instance does not have the log_min_messages database flag set to at minimum warning.

The log_min_messages flag controls which message levels are recorded in server logs. The higher the severity, the fewer messages are recorded. Setting the threshold too low can result in increased log storage size and length, making it difficult to find actual errors.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the AlloyDB for PostgreSQL clusters page in the Google Cloud console.

    Go to AlloyDB for PostgreSQL clusters

  2. Click the cluster in the Resource Name column.

  3. Under the Instances in your cluster section, click Edit for the instance.

  4. Click Advanced Configuration Options.

  5. Under the Flags section, set the log_min_messages database flag with one of the following recommended values, according to your organization's logging policy.

    • debug5
    • debug4
    • debug3
    • debug2
    • debug1
    • info
    • notice
    • warning
  6. Click Update Instance.

Learn about this finding type's supported assets and scan settings.

AlloyDB log error verbosity

Category name in the API: ALLOYDB_LOG_ERROR_VERBOSITY

An AlloyDB for PostgreSQL instance does not have the log_error_verbosity database flag set to default or another less restrictive value.

The log_error_verbosity flag controls the amount of detail in messages logged. The greater the verbosity, the more details are recorded in messages. We recommend setting this flag to default or another less restrictive value.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the AlloyDB for PostgreSQL clusters page in the Google Cloud console.

    Go to AlloyDB for PostgreSQL clusters

  2. Click the cluster in the Resource Name column.

  3. Under the Instances in your cluster section, click Edit for the instance.

  4. Click Advanced Configuration Options.

  5. Under the Flags section, set the log_error_verbosity database flag with one of the following recommended values, according to your organization's logging policy.

    • default
    • verbose
  6. Click Update Instance.

Learn about this finding type's supported assets and scan settings.

Admin service account

Category name in the API: ADMIN_SERVICE_ACCOUNT

A service account in your organization or project has Admin, Owner, or Editor privileges assigned to it. These roles have broad permissions and shouldn't be assigned to service accounts. To learn about service accounts and the roles available to them, see Service accounts.

To remediate this finding, complete the following steps:

  1. Go to the IAM policy page in the Google Cloud console.

    Go to IAM policy

  2. For each principal identified in the finding:

    1. Click Edit next to the principal.
    2. To remove permissions, click Delete next to the offending role.
    3. Click Save.
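
If you would rather remove the binding programmatically, the following minimal Python sketch edits the project IAM policy with the Resource Manager client library; the project ID, service account, and role are placeholders.

    # Minimal sketch: remove a service account from the Editor role on a
    # project with the Resource Manager Python client.
    from google.cloud import resourcemanager_v3

    project_id = "my-project"
    member = "serviceAccount:my-sa@my-project.iam.gserviceaccount.com"
    role = "roles/editor"

    client = resourcemanager_v3.ProjectsClient()
    policy = client.get_iam_policy(request={"resource": f"projects/{project_id}"})

    # The returned policy keeps its etag, which protects against concurrent edits.
    for binding in policy.bindings:
        if binding.role == role and member in binding.members:
            binding.members.remove(member)

    client.set_iam_policy(
        request={"resource": f"projects/{project_id}", "policy": policy}
    )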

Learn about this finding type's supported assets and scan settings.

Alpha cluster enabled

Category name in the API: ALPHA_CLUSTER_ENABLED

Alpha cluster features are enabled for a Google Kubernetes Engine (GKE) cluster.

Alpha clusters let early adopters experiment with workloads that use new features before they're released to the general public. Alpha clusters have all GKE API features enabled, but aren't covered by the GKE SLA, don't receive security updates, have node auto-upgrade and node auto-repair disabled, and can't be upgraded. They're also automatically deleted after 30 days.

To remediate this finding, complete the following steps:

Alpha clusters can't be disabled. You must create a new cluster with alpha features disabled.

  1. Go to the Kubernetes clusters page in the Google Cloud console.

    Go to Kubernetes clusters

  2. Click Create.

  3. Select Configure next to the type of cluster you want to create.

  4. Under the Features tab, ensure Enable Kubernetes alpha features in this cluster is disabled.

  5. Click Create.

  6. To move workloads to the new cluster, see Migrating workloads to different machine types.

  7. To delete the original cluster, see Deleting a cluster.

Learn about this finding type's supported assets and scan settings.

API key APIs unrestricted

Category name in the API: API_KEY_APIS_UNRESTRICTED

There are API keys being used too broadly.

Unrestricted API keys are insecure because they can be retrieved from devices on which the key is stored or can be seen publicly, for instance, from within a browser. In accordance with the principle of least privilege, configure API keys to only call APIs required by the application. For more information, see Apply API key restrictions.

To remediate this finding, complete the following steps:

  1. Go to the API keys page in the Google Cloud console.

    Go to API keys

  2. For each API key:

    1. In the API keys section, on the row for each API key for which you need to restrict APIs, display the Actions menu by clicking the icon.
    2. From the Actions menu, click Edit API key. The Edit API key page opens.
    3. In the API restrictions section, select Restrict APIs. The Select APIs drop-down menu appears.
    4. On the Select APIs drop-down list, select which APIs to allow.
    5. Click Save. It might take up to five minutes for settings to take effect.
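
You can also apply API restrictions with the API Keys API. The following minimal Python sketch assumes the google-cloud-api-keys client library; the key resource name and the allowed service are placeholders.

    # Minimal sketch: restrict an API key to a single API with the API Keys
    # Python client. The key name and target service are placeholders.
    from google.cloud import api_keys_v2
    from google.protobuf import field_mask_pb2

    client = api_keys_v2.ApiKeysClient()

    key = api_keys_v2.Key(
        name="projects/my-project/locations/global/keys/my-key-id",
        restrictions=api_keys_v2.Restrictions(
            api_targets=[api_keys_v2.ApiTarget(service="translate.googleapis.com")]
        ),
    )

    operation = client.update_key(
        request=api_keys_v2.UpdateKeyRequest(
            key=key,
            update_mask=field_mask_pb2.FieldMask(paths=["restrictions"]),
        )
    )
    print(operation.result().restrictions)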

Learn about this finding type's supported assets and scan settings.

API key apps unrestricted

Category name in the API: API_KEY_APPS_UNRESTRICTED

There are API keys being used in an unrestricted way, allowing use by any untrusted app.

Unrestricted API keys are insecure because they can be retrieved on devices on which the key is stored or can be seen publicly, for instance, from within a browser. In accordance with the principle of least privilege, restrict API key usage to trusted hosts, HTTP referrers, and apps. For more information, see Apply API key restrictions.

To remediate this finding, complete the following steps:

  1. Go to the API keys page in the Google Cloud console.

    Go to API keys

  2. For each API key:

    1. In the API keys section, on the row for each API key for which you need to restrict applications, display the Actions menu by clicking the icon.
    2. From the Actions menu, click Edit API key. The Edit API key page opens.
    3. On the Edit API key page, under Application restrictions, select a restriction category. You can set one application restriction per key.
    4. In the Add an item field that appears when you select a restriction, click Add an item to add restrictions based on the needs of your application.
    5. Once finished adding items, click Done.
    6. Click Save.

Learn about this finding type's supported assets and scan settings.

API key exists

Category name in the API: API_KEY_EXISTS

A project is using API keys instead of standard authentication.

API keys are less secure than other authentication methods because they are simple strings that are easy for others to discover and use. They can be retrieved on devices on which the key is stored or can be seen publicly, for instance, from within a browser. Also, API keys do not uniquely identify users or applications making requests. As an alternative, you can use a standard authentication flow, with either service accounts or user accounts.

To remediate this finding, complete the following steps:

  1. Ensure your applications are configured with an alternate form of authentication.
  2. Go to the API credentials page in the Google Cloud console.

    Go to API credentials

  3. In the API keys section on the row for each API key that you need to delete, display the Actions menu by clicking the icon.

  4. From the Actions menu, click Delete API key.

Learn about this finding type's supported assets and scan settings.

API key not rotated

Category name in the API: API_KEY_NOT_ROTATED

An API key hasn't been rotated for more than 90 days.

API keys do not expire, so if one is stolen, it might be used indefinitely unless the project owner revokes or rotates the key. Regenerating API keys frequently reduces the amount of time that a stolen API key can be used to access data on a compromised or terminated account. Rotate API keys at least every 90 days. For more information, see Secure an API key.

To remediate this finding, complete the following steps:

  1. Go to the API keys page in the Google Cloud console.

    Go to API keys

  2. For each API key:

    1. In the API keys section, on the row for each API key that you need to rotate, display the Actions menu by clicking the icon.
    2. From the Actions menu, click Edit API key. The Edit API key page opens.
    3. On the Edit API key page, if the date in the Creation date field is older than 90 days, replace the key by clicking Regenerate key at the top of the page. A new replacement key is generated.
    4. Click Save.
    5. To ensure your applications continue working uninterrupted, update them to use the new API key. The old API key works for 24 hours before it is permanently deactivated.

Learn about this finding type's supported assets and scan settings.

Audit config not monitored

Category name in the API: AUDIT_CONFIG_NOT_MONITORED

Log metrics and alerts aren't configured to monitor audit configuration changes.

Cloud Logging produces Admin Activity and Data Access logs that enable security analysis, resource change tracking, and compliance auditing. By monitoring audit configuration changes, you ensure that all activities in your project can be audited at any time. For more information, see Overview of logs-based metrics.

Depending on the quantity of information, Cloud Monitoring costs can be significant. To understand your usage of the service and its costs, see Cost optimization for Google Cloud Observability.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, create metrics, if necessary, and alert policies:

Create metric

  1. Go to the Logs-based Metrics page in the Google Cloud console.

    Go to Logs-based Metrics

  2. Click Create Metric.

  3. Under Metric Type, select Counter.

  4. Under Details:

    1. Set a Log metric name.
    2. Add a description.
    3. Set Units to 1.
  5. Under Filter selection, copy and paste the following text into the Build filter box, replacing existing text, if necessary:

      protoPayload.methodName="SetIamPolicy"
      AND protoPayload.serviceData.policyDelta.auditConfigDeltas:*
    

  6. Click Create Metric. You see a confirmation.
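
If you prefer to create the metric programmatically, the following minimal Python sketch uses the Cloud Logging client library with the same filter; the metric name is a placeholder, and the same pattern works for the other "not monitored" findings on this page with their respective filters.

    # Minimal sketch: create the counter metric with the Cloud Logging Python
    # client. The metric name is a placeholder.
    from google.cloud import logging

    client = logging.Client(project="my-project")

    audit_config_filter = (
        'protoPayload.methodName="SetIamPolicy" '
        "AND protoPayload.serviceData.policyDelta.auditConfigDeltas:*"
    )

    metric = client.metric(
        "audit-config-changes",
        filter_=audit_config_filter,
        description="Counts changes to audit configurations",
    )
    metric.create()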

Create Alert Policy

  1. In the navigation panel of the Google Cloud console, select Logging, and then select Log-based Metrics:

    Go to Log-based Metrics

  2. Under the User-defined metrics section, select the metric you created in the previous section.
  3. Click More, and then click Create alert from metric.

    The New condition dialog opens with the metric and data transformation options pre-populated.

  4. Click Next.
    1. Review the pre-populated settings. You might want to modify the Threshold value.
    2. Click Condition name and enter a name for the condition.
  5. Click Next.
  6. To add notifications to your alerting policy, click Notification channels. In the dialog, select one or more notification channels from the menu, and then click OK.

    To be notified when incidents are opened and closed, check Notify on incident closure. By default, notifications are sent only when incidents are opened.

  7. Optional: Update the Incident autoclose duration. This field determines when Monitoring closes incidents in the absence of metric data.
  8. Optional: Click Documentation, and then add any information that you want included in a notification message.
  9. Click Alert name and enter a name for the alerting policy.
  10. Click Create Policy.

Learn about this finding type's supported assets and scan settings.

Audit logging disabled

Category name in the API: AUDIT_LOGGING_DISABLED

This finding isn't available for project-level activations.

Audit logging is disabled for one or more Google Cloud services, or one or more principals are exempt from data access audit logging.

Enable Cloud Logging for all services to track all admin activities, read access, and write access to user data. Depending on the quantity of information, Cloud Logging costs can be significant. To understand your usage of the service and its cost, see Cost optimization for Google Cloud Observability.

If any principals are exempted from data access audit logging on either the default data access audit logging configuration or the logging configurations for any individual services, remove the exemption.

To remediate this finding, complete the following steps:

  1. Go to the Data Access audit logs default configuration page in the Google Cloud console.

    Go to default configuration

  2. On the Log types tab, activate data access audit logging in the default configuration:

    1. Select Admin Read, Data Read, and Data Write.
    2. Click Save.
  3. On the Exempted principals tab, remove all exempted users from the default configuration:

    1. Remove each listed principal by clicking Delete next to each name.
    2. Click Save.
  4. Go to the Audit Logs page.

    Go to audit logs

  5. Remove any exempted principals from the data access audit log configurations of individual services.

    1. Under Data access audit logs configuration, for each service that shows an exempted principal, click on the service. An audit log configuration panel opens for the service.
    2. On the Exempted principals tab, remove all exempted principals by clicking Delete next to each name.
    3. Click Save.

Learn about this finding type's supported assets and scan settings.

Auto backup disabled

Category name in the API: AUTO_BACKUP_DISABLED

A Cloud SQL database doesn't have automatic backups enabled.

To prevent data loss, turn on automated backups for your SQL instances. For more information, see Creating and managing on-demand and automatic backups.

To remediate this finding, complete the following steps:

  1. Go to the SQL instance backups page in the Google Cloud console.

    Go to SQL instance backups

  2. Next to Settings, click Edit.

  3. Select the box for Automate backups.

  4. In the drop-down menu, choose a window of time for your data to be automatically backed up.

  5. Click Save.
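
You can also enable automated backups through the Cloud SQL Admin API. The following minimal Python sketch uses the Google API Python client; the project ID, instance name, and backup window start time are placeholders.

    # Minimal sketch: turn on automated backups for a Cloud SQL instance with
    # the Cloud SQL Admin API.
    from googleapiclient import discovery

    sqladmin = discovery.build("sqladmin", "v1")

    body = {
        "settings": {
            "backupConfiguration": {
                "enabled": True,
                "startTime": "02:00",  # UTC start of the backup window.
            }
        }
    }

    request = sqladmin.instances().patch(
        project="my-project", instance="my-instance", body=body
    )
    response = request.execute()
    print(response["name"])  # Name of the long-running operation.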

Learn about this finding type's supported assets and scan settings.

Auto repair disabled

Category name in the API: AUTO_REPAIR_DISABLED

A Google Kubernetes Engine (GKE) cluster's auto repair feature, which keeps nodes in a healthy, running state, is disabled.

When enabled, GKE makes periodic checks on the health state of each node in your cluster. If a node fails consecutive health checks over an extended time period, GKE initiates a repair process for that node. For more information, see Auto-repairing nodes.

To remediate this finding, complete the following steps:

  1. Go to the Kubernetes clusters page in the Google Cloud console.

    Go to Kubernetes clusters

  2. Click the Nodes tab.

  3. For each node pool:

    1. Click the name of the node pool to go to its detail page.
    2. Click Edit.
    3. Under Management, select Enable auto-repair.
    4. Click Save.
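
As an alternative, the following minimal Python sketch enables auto-repair on a node pool with the GKE client library; the resource name is a placeholder. The same request also controls auto-upgrade, which is covered by the next finding type.

    # Minimal sketch: enable auto-repair on a GKE node pool with the GKE
    # Python client.
    from google.cloud import container_v1

    client = container_v1.ClusterManagerClient()

    name = (
        "projects/my-project/locations/us-central1"
        "/clusters/my-cluster/nodePools/default-pool"
    )

    operation = client.set_node_pool_management(
        request={
            "name": name,
            "management": {"auto_repair": True, "auto_upgrade": True},
        }
    )
    print(operation.status)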

Learn about this finding type's supported assets and scan settings.

Auto upgrade disabled

Category name in the API: AUTO_UPGRADE_DISABLED

A GKE cluster's auto upgrade feature, which keeps clusters and node pools on the latest stable version of Kubernetes, is disabled.

For more information, see Auto-upgrading nodes.

To remediate this finding, complete the following steps:

  1. Go to the Kubernetes clusters page in the Google Cloud console.

    Go to Kubernetes clusters

  2. In the list of clusters, click the name of the cluster.

  3. Click the Nodes tab.

  4. For each node pool:

    1. Click the name of the node pool to go to its detail page.
    2. Click Edit.
    3. Under Management, select Enable auto-upgrade.
    4. Click Save.

Learn about this finding type's supported assets and scan settings.

BigQuery table CMEK disabled

Category name in the API: BIGQUERY_TABLE_CMEK_DISABLED

A BigQuery table is not configured to use a customer-managed encryption key (CMEK).

With CMEK, keys that you create and manage in Cloud KMS wrap the keys that Google Cloud uses to encrypt your data, giving you more control over access to your data. For more information, see Protecting data with Cloud KMS keys.

To remediate this finding, complete the following steps:

  1. Create a table protected by Cloud Key Management Service.
  2. Copy your table to the new CMEK-enabled table.
  3. Delete the original table.

To set a default CMEK key that encrypts all new tables in a dataset, see Set a dataset default key.
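
The copy step can also be done programmatically. The following minimal Python sketch copies a table into a CMEK-protected destination with the BigQuery client library; the table IDs and the Cloud KMS key name are placeholders, and the key must be in a location compatible with the dataset.

    # Minimal sketch: copy a table to a new CMEK-protected table with the
    # BigQuery Python client.
    from google.cloud import bigquery

    client = bigquery.Client()

    kms_key_name = (
        "projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key"
    )

    job_config = bigquery.CopyJobConfig(
        destination_encryption_configuration=bigquery.EncryptionConfiguration(
            kms_key_name=kms_key_name
        )
    )

    job = client.copy_table(
        "my-project.my_dataset.source_table",
        "my-project.my_dataset.cmek_table",
        job_config=job_config,
    )
    job.result()  # Wait for the copy to finish before deleting the original.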

Learn about this finding type's supported assets and scan settings.

Binary authorization disabled

Category name in the API: BINARY_AUTHORIZATION_DISABLED

Binary Authorization is disabled on a GKE cluster.

Binary Authorization includes an optional feature that protects supply chain security by only allowing container images signed by trusted authorities during the development process to be deployed in the cluster. By enforcing signature-based deployment, you gain tighter control over your container environment, ensuring only verified images are allowed to be deployed.

To remediate this finding, complete the following steps:

  1. Go to the Kubernetes clusters page in the Google Cloud console.

    Go to Kubernetes clusters

  2. In the Security section, click the edit icon in the Binary Authorization row.

    If the cluster configuration recently changed, the edit button might be disabled. If you aren't able to edit the cluster settings, wait a few minutes and then try again.

  3. In the dialog, select Enable Binary Authorization.

  4. Click Save changes.

  5. Go to the Binary Authorization setup page.

    Go to Binary Authorization

  6. Ensure a policy that requires attestors is configured and the project default rule is not configured to Allow all images. For more information, see Set up for GKE.

    To allow images that violate the policy to be deployed while the violations are logged to Cloud Audit Logs, you can enable dry-run mode.

Learn about this finding type's supported assets and scan settings.

Bucket CMEK disabled

Category name in the API: BUCKET_CMEK_DISABLED

A bucket is not encrypted with customer-managed encryption keys (CMEK).

Setting a default CMEK on a bucket gives you more control over access to your data. For more information, see Customer-managed encryption keys.

To remediate this finding, use CMEK with a bucket by following Using customer-managed encryption keys. CMEK incurs additional costs related to Cloud KMS.
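
As a sketch of the API-based approach, the following Python snippet sets a default Cloud KMS key on a bucket with the Cloud Storage client library; the bucket and key names are placeholders, and the Cloud Storage service agent needs the CryptoKey Encrypter/Decrypter role on the key.

    # Minimal sketch: set a default customer-managed encryption key on a
    # bucket with the Cloud Storage Python client.
    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("my-bucket")

    bucket.default_kms_key_name = (
        "projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key"
    )
    bucket.patch()

    print(bucket.default_kms_key_name)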

Learn about this finding type's supported assets and scan settings.

Bucket IAM not monitored

Category name in the API: BUCKET_IAM_NOT_MONITORED

Log metrics and alerts aren't configured to monitor Cloud Storage IAM permission changes.

Monitoring changes to Cloud Storage bucket permissions helps you identify over-privileged users or suspicious activity. For more information, see Overview of logs-based metrics.

Depending on the quantity of information, Cloud Monitoring costs can be significant. To understand your usage of the service and its costs, see Cost optimization for Google Cloud Observability.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, complete the following steps:

Create metric

  1. Go to the Logs-based Metrics page in the Google Cloud console.

    Go to Logs-based Metrics

  2. Click Create Metric.

  3. Under Metric Type, select Counter.

  4. Under Details:

    1. Set a Log metric name.
    2. Add a description.
    3. Set Units to 1.
  5. Under Filter selection, copy and paste the following text into the Build filter box, replacing existing text, if necessary:

      resource.type=gcs_bucket
      AND protoPayload.methodName="storage.setIamPermissions"
    

  6. Click Create Metric. You see a confirmation.

Create Alert Policy

  1. In the navigation panel of the Google Cloud console, select Logging, and then select Log-based Metrics:

    Go to Log-based Metrics

  2. Under the User-defined metrics section, select the metric you created in the previous section.
  3. Click More, and then click Create alert from metric.

    The New condition dialog opens with the metric and data transformation options pre-populated.

  4. Click Next.
    1. Review the pre-populated settings. You might want to modify the Threshold value.
    2. Click Condition name and enter a name for the condition.
  5. Click Next.
  6. To add notifications to your alerting policy, click Notification channels. In the dialog, select one or more notification channels from the menu, and then click OK.

    To be notified when incidents are opened and closed, check Notify on incident closure. By default, notifications are sent only when incidents are opened.

  7. Optional: Update the Incident autoclose duration. This field determines when Monitoring closes incidents in the absence of metric data.
  8. Optional: Click Documentation, and then add any information that you want included in a notification message.
  9. Click Alert name and enter a name for the alerting policy.
  10. Click Create Policy.

Learn about this finding type's supported assets and scan settings.

Bucket logging disabled

Category name in the API: BUCKET_LOGGING_DISABLED

There is a storage bucket without logging enabled.

To help investigate security issues and monitor storage consumption, enable access logs and storage information for your Cloud Storage buckets. Access logs provide information for all requests made on a specified bucket, and the storage logs provide information about the storage consumption of that bucket.

To remediate this finding, set up logging for the bucket indicated by the Security Health Analytics finding by completing the usage logs & storage logs guide.
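
As a rough illustration, the following Python sketch enables usage logging with the Cloud Storage client library; the bucket names and log object prefix are placeholders, and the log bucket must already exist with permissions that let Cloud Storage write to it.

    # Minimal sketch: enable usage logging for a bucket with the Cloud Storage
    # Python client.
    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("my-bucket")

    bucket.enable_logging("my-log-bucket", object_prefix="my-bucket-logs")
    bucket.patch()

    print(bucket.get_logging())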

Learn about this finding type's supported assets and scan settings.

Bucket policy only disabled

Category name in the API: BUCKET_POLICY_ONLY_DISABLED

Uniform bucket-level access, previously called Bucket Policy Only, isn't configured.

Uniform bucket-level access simplifies bucket access control by disabling object-level permissions (ACLs). When enabled, only bucket-level IAM permissions grant access to the bucket and the objects it contains. For more information, see Uniform bucket-level access.

To remediate this finding, complete the following steps:

  1. Go to the Cloud Storage browser page in the Google Cloud console.

    Go to Cloud Storage browser

  2. In the list of buckets, click the name of the desired bucket.

  3. Click the Configuration tab.

  4. Under Permissions, in the row for Access control, click the edit icon.

  5. In the dialog, select Uniform.

  6. Click Save.
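
The same change can be made programmatically. The following minimal Python sketch turns on uniform bucket-level access with the Cloud Storage client library; the bucket name is a placeholder.

    # Minimal sketch: enable uniform bucket-level access with the Cloud
    # Storage Python client.
    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("my-bucket")

    bucket.iam_configuration.uniform_bucket_level_access_enabled = True
    bucket.patch()

    print(bucket.iam_configuration.uniform_bucket_level_access_enabled)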

Learn about this finding type's supported assets and scan settings.

Cloud Asset API disabled

Category name in the API: CLOUD_ASSET_API_DISABLED

Cloud Asset Inventory service is not enabled for the project.

To remediate this finding, complete the following steps:

  1. Go to the API Library page in the Google Cloud console.

    Go to API Library

  2. Search for Cloud Asset Inventory.

  3. Select the result for Cloud Asset API service.

  4. Ensure that API Enabled is displayed.
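
If the API isn't enabled, you can enable it from this page or, as sketched below, through the Service Usage API with the Google API Python client; the project ID is a placeholder.

    # Minimal sketch: enable the Cloud Asset API on a project with the
    # Service Usage API.
    from googleapiclient import discovery

    serviceusage = discovery.build("serviceusage", "v1")

    request = serviceusage.services().enable(
        name="projects/my-project/services/cloudasset.googleapis.com"
    )
    operation = request.execute()
    print(operation)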

Cluster logging disabled

Category name in the API: CLUSTER_LOGGING_DISABLED

Logging isn't enabled for a GKE cluster.

To help investigate security issues and monitor usage, enable Cloud Logging on your clusters.

Depending on the quantity of information, Cloud Logging costs can be significant. To understand your usage of the service and its cost, see Cost optimization for Google Cloud Observability.

To remediate this finding, complete the following steps:

  1. Go to the Kubernetes clusters page in the Google Cloud console.

    Go to Kubernetes clusters

  2. Select the cluster listed in the Security Health Analytics finding.

  3. Click Edit.

    If the cluster configuration recently changed, the edit button might be disabled. If you aren't able to edit the cluster settings, wait a few minutes and then try again.

  4. On the Legacy Stackdriver Logging or Stackdriver Kubernetes Engine Monitoring drop-down list, select Enabled.

    These options aren't compatible. Make sure that you use either Stackdriver Kubernetes Engine Monitoring alone, or Legacy Stackdriver Logging with Legacy Stackdriver Monitoring.

  5. Click Save.
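
The console steps above reflect the legacy Stackdriver options; on current GKE versions you can also enable Cloud Logging through the API. The following minimal Python sketch uses the GKE client library; the cluster resource name is a placeholder.

    # Minimal sketch: enable Cloud Logging for a GKE cluster with the GKE
    # Python client.
    from google.cloud import container_v1

    client = container_v1.ClusterManagerClient()

    operation = client.set_logging_service(
        request={
            "name": "projects/my-project/locations/us-central1/clusters/my-cluster",
            "logging_service": "logging.googleapis.com/kubernetes",
        }
    )
    print(operation.status)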

Learn about this finding type's supported assets and scan settings.

Cluster monitoring disabled

Category name in the API: CLUSTER_MONITORING_DISABLED

Monitoring is disabled on GKE clusters.

To help investigate security issues and monitor usage, enable Cloud Monitoring on your clusters.

Depending on the quantity of information, Cloud Monitoring costs can be significant. To understand your usage of the service and its costs, see Cost optimization for Google Cloud Observability.

To remediate this finding, complete the following steps:

  1. Go to the Kubernetes clusters page in the Google Cloud console.

    Go to Kubernetes clusters

  2. Select the cluster listed in the Security Health Analytics finding.

  3. Click Edit.

    If the cluster configuration recently changed, the edit button might be disabled. If you aren't able to edit the cluster settings, wait a few minutes and then try again.

  4. On the Legacy Stackdriver Monitoring or Stackdriver Kubernetes Engine Monitoring drop-down list, select Enabled.

    These options aren't compatible. Make sure that you use either Stackdriver Kubernetes Engine Monitoring alone, or Legacy Stackdriver Monitoring with Legacy Stackdriver Logging.

  5. Click Save.

Learn about this finding type's supported assets and scan settings.

Cluster private Google access disabled

Category name in the API: CLUSTER_PRIVATE_GOOGLE_ACCESS_DISABLED

Cluster hosts are not configured to use only private, internal IP addresses to access Google APIs.

Private Google Access enables virtual machine (VM) instances with only private, internal IP addresses to reach the public IP addresses of Google APIs and services. For more information, see Configuring Private Google Access.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, complete the following steps:

  1. Go to the Virtual Private Cloud networks page in the Google Cloud console.

    Go to VPC networks

  2. In the list of networks, click the name of the desired network.

  3. On the VPC network details page, click the Subnets tab.

  4. In the list of subnets, click the name of the subnet associated with the Kubernetes cluster in the finding.

  5. On the Subnet details page, click Edit.

  6. Under Private Google Access, select On.

  7. Click Save.

  8. To remove public (external) IPs from VM instances whose only external traffic is to Google APIs, see Unassigning a static external IP address.

Learn about this finding type's supported assets and scan settings.

Cluster secrets encryption disabled

Category name in the API: CLUSTER_SECRETS_ENCRYPTION_DISABLED

Application-layer secrets encryption is disabled on a GKE cluster.

Application-layer secrets encryption ensures GKE secrets are encrypted using Cloud KMS keys. The feature provides an additional layer of security for sensitive data, such as user-defined secrets and secrets required for the operation of the cluster, such as service account keys, which are all stored in etcd.

To remediate this finding, complete the following steps:

  1. Go to the Cloud KMS keys page in the Google Cloud console.

    Go to Cloud KMS keys

  2. Review your application keys or create a database encryption key (DEK). For more information, see Creating a Cloud KMS key.

  3. Go to the Kubernetes clusters page.

    Go to Kubernetes clusters

  4. Select the cluster in the finding.

  5. Under Security, in the Application-layer secrets encryption field, click Edit Application-layer Secrets Encryption.

  6. Select the Enable Application-layer Secrets Encryption checkbox, and then choose the DEK you created.

  7. Click Save Changes.

Learn about this finding type's supported assets and scan settings.

Cluster shielded nodes disabled

Category name in the API: CLUSTER_SHIELDED_NODES_DISABLED

Shielded GKE nodes are not enabled for a cluster.

Without Shielded GKE nodes, attackers can exploit a vulnerability in a Pod to exfiltrate bootstrap credentials and impersonate nodes in your cluster. The vulnerability can give attackers access to cluster secrets.

To remediate this finding, complete the following steps:

  1. Go to the Kubernetes clusters page in the Google Cloud console.

    Go to Kubernetes clusters

  2. Select the cluster in the finding.

  3. Under Security, in the Shielded GKE nodes field, click Edit Shielded GKE nodes.

  4. Select the Enable Shielded GKE nodes checkbox.

  5. Click Save Changes.

Learn about this finding type's supported assets and scan settings.

Compute project wide SSH keys allowed

Category name in the API: COMPUTE_PROJECT_WIDE_SSH_KEYS_ALLOWED

Project-wide SSH keys are used, allowing login to all instances in the project.

Using project-wide SSH keys makes SSH key management easier but, if compromised, poses a security risk which can impact all instances within a project. You should use instance-specific SSH keys, which limit the attack surface if SSH keys are compromised. For more information, see Managing SSH keys in metadata.

To remediate this finding, complete the following steps:

  1. Go to the VM instances page in the Google Cloud console.

    Go to VM instances

  2. In the list of instances, click the name of the instance in the finding.

  3. On the VM instance details page, click Edit.

  4. Under SSH Keys, select Block project-wide SSH keys.

  5. Click Save.
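
You can also block project-wide SSH keys by setting instance metadata through the API. The following minimal Python sketch uses the Compute Engine client library; the project, zone, and instance names are placeholders.

    # Minimal sketch: block project-wide SSH keys on an instance by setting the
    # block-project-ssh-keys metadata key with the Compute Engine Python client.
    from google.cloud import compute_v1

    project, zone, instance_name = "my-project", "us-central1-a", "my-instance"

    client = compute_v1.InstancesClient()
    instance = client.get(project=project, zone=zone, instance=instance_name)

    # Reuse the current metadata so the fingerprint and existing items are kept.
    metadata = instance.metadata
    for item in metadata.items:
        if item.key == "block-project-ssh-keys":
            item.value = "true"
            break
    else:
        metadata.items.append(
            compute_v1.Items(key="block-project-ssh-keys", value="true")
        )

    operation = client.set_metadata(
        project=project,
        zone=zone,
        instance=instance_name,
        metadata_resource=metadata,
    )
    operation.result()  # Wait for the metadata update to finish.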

Learn about this finding type's supported assets and scan settings.

Compute Secure Boot disabled

Category name in the API: COMPUTE_SECURE_BOOT_DISABLED

A Shielded VM does not have Secure Boot enabled.

Using Secure Boot helps protect your virtual machines against rootkits and bootkits. Compute Engine does not enable Secure Boot by default because some unsigned drivers and low-level software are not compatible. If your VM does not use incompatible software and it boots with Secure Boot enabled, Google recommends using Secure Boot. If you are using third-party modules with Nvidia drivers, make sure they are compatible with Secure Boot before you enable it.

For more information, see Secure Boot.

To remediate this finding, complete the following steps:

  1. Go to the VM instances page in the Google Cloud console.

    Go to VM instances

  2. In the list of instances, click the name of the instance in the finding.

  3. On the VM instance details page, click Stop.

  4. After the instance stops, click Edit.

  5. Under Shielded VM, select Turn on Secure Boot.

  6. Click Save.

  7. Click Start to start the instance.
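
The same change can be scripted. The following minimal Python sketch stops the instance, enables Secure Boot in its Shielded VM configuration, and restarts it, using the Compute Engine client library; the project, zone, and instance names are placeholders.

    # Minimal sketch: enable Secure Boot on a stopped Shielded VM with the
    # Compute Engine Python client.
    from google.cloud import compute_v1

    project, zone, instance = "my-project", "us-central1-a", "my-instance"

    client = compute_v1.InstancesClient()

    # The instance must be stopped before its Shielded VM config can change.
    client.stop(project=project, zone=zone, instance=instance).result()

    config = compute_v1.ShieldedInstanceConfig(
        enable_secure_boot=True,
        enable_vtpm=True,
        enable_integrity_monitoring=True,
    )
    client.update_shielded_instance_config(
        project=project,
        zone=zone,
        instance=instance,
        shielded_instance_config_resource=config,
    ).result()

    client.start(project=project, zone=zone, instance=instance).result()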

Learn about this finding type's supported assets and scan settings.

Compute serial ports enabled

Category name in the API: COMPUTE_SERIAL_PORTS_ENABLED

Serial ports are enabled for an instance, allowing connections to the instance's serial console.

If you enable the interactive serial console on an instance, clients can attempt to connect to that instance from any IP address. Therefore, interactive serial console support should be disabled. For more information, see Enabling access for a project.

To remediate this finding, complete the following steps:

  1. Go to the VM instances page in the Google Cloud console.

    Go to VM instances

  2. In the list of instances, click the name of the instance in the finding.

  3. On the VM instance details page, click Edit.

  4. Under Remote access, clear Enable connecting to serial ports.

  5. Click Save.

Learn about this finding type's supported assets and scan settings.

Confidential Computing disabled

Category name in the API: CONFIDENTIAL_COMPUTING_DISABLED

A Compute Engine instance doesn't have Confidential Computing enabled.

Confidential Computing adds a third pillar to the end-to-end encryption story by encrypting data while in use. With the confidential execution environments provided by Confidential Computing and AMD Secure Encrypted Virtualization (SEV), Google Cloud keeps sensitive code and other data encrypted in memory during processing.

Confidential Computing can only be enabled when an instance is created. Thus, you must delete the current instance and create a new one.

For more information, see Confidential VM and Compute Engine.

To remediate this finding, complete the following steps:

  1. Go to the VM instances page in the Google Cloud console.

    Go to VM instances

  2. In the list of instances, click the name of the instance in the finding.

  3. On the VM instance details page, click Delete.

  4. Create a Confidential VM using the Google Cloud console.

Learn about this finding type's supported assets and scan settings.

COS not used

Category name in the API: COS_NOT_USED

Compute Engine VMs aren't using the Container-Optimized OS, which is designed to run Docker containers on Google Cloud securely.

Container-Optimized OS is Google's recommended OS for hosting and running containers on Google Cloud. Its small OS footprint minimizes security exposure, while automatic updates patch security vulnerabilities in a timely manner. For more information, see Container-Optimized OS Overview.

To remediate this finding, complete the following steps:

  1. Go to the Kubernetes clusters page in the Google Cloud console.

    Go to Kubernetes clusters

  2. In the list of clusters, click the name of the cluster in the finding.

  3. Click the Nodes tab.

  4. For each node pool:

    1. Click the name of the node pool to go to its detail page.
    2. Click Edit.
    3. Under Nodes -> Image type, click Change.
    4. Select Container-Optimized OS, and then click Change.
    5. Click Save.

Learn about this finding type's supported assets and scan settings.

Custom role not monitored

Category name in the API: CUSTOM_ROLE_NOT_MONITORED

Log metrics and alerts aren't configured to monitor custom role changes.

IAM provides predefined and custom roles that grant access to specific Google Cloud resources. By monitoring role creation, deletion, and update activities, you can identify over-privileged roles at early stages. For more information, see Overview of logs-based metrics.

Depending on the quantity of information, Cloud Monitoring costs can be significant. To understand your usage of the service and its costs, see Cost optimization for Google Cloud Observability.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, complete the following steps:

Create metric

  1. Go to the Logs-based Metrics page in the Google Cloud console.

    Go to Logs-based Metrics

  2. Click Create Metric.

  3. Under Metric Type, select Counter.

  4. Under Details:

    1. Set a Log metric name.
    2. Add a description.
    3. Set Units to 1.
  5. Under Filter selection, copy and paste the following text into the Build filter box, replacing existing text, if necessary:

      resource.type="iam_role"
      AND (protoPayload.methodName="google.iam.admin.v1.CreateRole"
      OR protoPayload.methodName="google.iam.admin.v1.DeleteRole"
      OR protoPayload.methodName="google.iam.admin.v1.UpdateRole")
    

  6. Click Create Metric. You see a confirmation.

Create Alert Policy

  1. In the navigation panel of the Google Cloud console, select Logging, and then select Log-based Metrics:

    Go to Log-based Metrics

  2. Under the User-defined metrics section, select the metric you created in the previous section.
  3. Click More, and then click Create alert from metric.

    The New condition dialog opens with the metric and data transformation options pre-populated.

  4. Click Next.
    1. Review the pre-populated settings. You might want to modify the Threshold value.
    2. Click Condition name and enter a name for the condition.
  5. Click Next.
  6. To add notifications to your alerting policy, click Notification channels. In the dialog, select one or more notification channels from the menu, and then click OK.

    To be notified when incidents are opened and closed, check Notify on incident closure. By default, notifications are sent only when incidents are opened.

  7. Optional: Update the Incident autoclose duration. This field determines when Monitoring closes incidents in the absence of metric data.
  8. Optional: Click Documentation, and then add any information that you want included in a notification message.
  9. Click Alert name and enter a name for the alerting policy.
  10. Click Create Policy.

Learn about this finding type's supported assets and scan settings.

Dataproc CMEK disabled

Category name in the API: DATAPROC_CMEK_DISABLED

A Dataproc cluster was created without an encryption configuration CMEK. With CMEK, keys that you create and manage in Cloud Key Management Service wrap the keys that Google Cloud uses to encrypt your data, giving you more control over access to your data.

To remediate this finding, complete the following steps:

  1. Go to the Dataproc cluster page in the Google Cloud console.

    Go to Dataproc clusters

  2. Select your project and click Create Cluster.

  3. In the Manage security section, click Encryption and then select Customer-managed key.

  4. Select a customer-managed key from the list.

    If you don't have a customer-managed key, you need to create one. For more information, see Customer-managed encryption keys.

  5. Ensure that the selected KMS key has the Cloud KMS CryptoKey Encrypter/Decrypter role assigned to the Dataproc cluster service account ("serviceAccount:service-project_number@compute-system.iam.gserviceaccount.com").

  6. After the cluster is created, migrate all of your workloads from the older cluster to the new cluster.

  7. Go to Dataproc clusters and select your project.

  8. Select the old cluster and click Delete cluster.

  9. Repeat all steps above for other Dataproc clusters available in the selected project.

Learn about this finding type's supported assets and scan settings.

Dataproc image outdated

Category name in the API: DATAPROC_IMAGE_OUTDATED

A Dataproc cluster was created using a Dataproc image version that is affected by security vulnerabilities in the Apache Log4j 2 utility (CVE-2021-44228 and CVE-2021-45046).

This detector finds vulnerabilities by checking if the softwareConfig.imageVersion field in the config property of a Cluster has any of the following affected versions:

  • Image versions earlier than 1.3.95.
  • Subminor image versions earlier than 1.4.77, 1.5.53, and 2.0.27.

The version number of a custom Dataproc image can be overridden manually. Consider the following scenarios:

  • One can modify the version of an affected custom image to make it appear to be unaffected. In this case, this detector doesn't emit a finding.
  • One can override the version of an unaffected custom image with one that is known to have the vulnerability. In this case, this detector emits a false positive finding. To suppress these false positive findings, you can mute them.

To remediate this finding, recreate and update the affected cluster.

Learn about this finding type's supported assets and scan settings.

Dataset CMEK disabled

Category name in the API: DATASET_CMEK_DISABLED

A BigQuery dataset is not configured to use a default customer-managed encryption key (CMEK).

With CMEK, keys that you create and manage in Cloud KMS wrap the keys that Google Cloud uses to encrypt your data, giving you more control over access to your data. For more information, see Protecting data with Cloud KMS keys.

To remediate this finding, complete the following steps:

You can't switch a table in place between default encryption and CMEK encryption. To set a default CMEK key with which to encrypt all new tables in the dataset, follow the instructions to Set a dataset default key.

Setting a default key will not retroactively re-encrypt tables currently in the dataset with a new key. To use CMEK for existing data, do the following:

  1. Create a new dataset.
  2. Set a default CMEK key on the dataset you created.
  3. To copy tables to your CMEK-enabled dataset, follow the instructions for Copying a table.
  4. After copying data successfully, delete the original datasets.
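
Step 2 can also be done programmatically. The following minimal Python sketch sets a default Cloud KMS key on a dataset with the BigQuery client library; the dataset and key names are placeholders.

    # Minimal sketch: set a default customer-managed key on a dataset with the
    # BigQuery Python client.
    from google.cloud import bigquery

    client = bigquery.Client()

    dataset = client.get_dataset("my-project.my_cmek_dataset")
    dataset.default_encryption_configuration = bigquery.EncryptionConfiguration(
        kms_key_name=(
            "projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key"
        )
    )
    dataset = client.update_dataset(dataset, ["default_encryption_configuration"])

    print(dataset.default_encryption_configuration.kms_key_name)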

Learn about this finding type's supported assets and scan settings.

Default network

Category name in the API: DEFAULT_NETWORK

The default network exists in a project.

Default networks have automatically created firewall rules and network configurations which might not be secure. For more information, see Default network.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, complete the following steps:

  1. Go to the VPC networks page in the Google Cloud console.

    Go to VPC networks

  2. In the list of networks, click the name of the default network.

  3. In the VPC network details page, click Delete VPC Network.

  4. To create a new network with custom firewall rules, see Creating networks.

Learn about this finding type's supported assets and scan settings.

Default service account used

Category name in the API: DEFAULT_SERVICE_ACCOUNT_USED

A Compute Engine instance is configured to use the default service account.

The default Compute Engine service account has the Editor role on the project, which allows read and write access to most Google Cloud services. To defend against privilege escalations and unauthorized access, don't use the default Compute Engine service account. Instead, create a new service account and assign only the permissions needed by your instance. Read Access control for information on roles and permissions.

To remediate this finding, complete the following steps:

  1. Go to the VM instances page in the Google Cloud console.

    Go to VM instances

  2. Select the instance related to the Security Health Analytics finding.

  3. On the Instance details page that loads, click Stop.

  4. After the instance stops, click Edit.

  5. Under the Service Account section, select a service account other than the default Compute Engine service account. You might first need to create a new service account. Read Access control for information on IAM roles and permissions.

  6. Click Save. The new configuration appears on the Instance details page.

Learn about this finding type's supported assets and scan settings.

Disk CMEK disabled

Category name in the API: DISK_CMEK_DISABLED

Disks on this VM are not encrypted with customer-managed encryption keys (CMEK).

With CMEK, keys that you create and manage in Cloud KMS wrap the keys that Google Cloud uses to encrypt your data, giving you more control over access to your data. For more information, see Protecting Resources with Cloud KMS Keys.

To remediate this finding, complete the following steps:

  1. Go to the Compute Engine disks page in the Google Cloud console.

    Go to Compute Engine disks

  2. In the list of disks, click the name of the disk indicated in the finding.

  3. On the Manage disk page, click Delete.

  4. To create a new disk with CMEK enabled, see Encrypt a new persistent disk with your own keys. CMEK incurs additional costs related to Cloud KMS.

Learn about this finding type's supported assets and scan settings.

Disk CSEK disabled

Category name in the API: DISK_CSEK_DISABLED

Disks on this VM are not encrypted with Customer-Supplied Encryption Keys (CSEK). Disks for critical VMs should be encrypted with CSEK.

If you provide your own encryption keys, Compute Engine uses your key to protect the Google-generated keys used to encrypt and decrypt your data. For more information, see Customer-Supplied Encryption Keys. CSEK incurs additional costs related to Cloud KMS.

To remediate this finding, complete the following steps:

Delete and create disk

You can only encrypt new persistent disks with your own key. You cannot encrypt existing persistent disks with your own key.

  1. Go to the Compute Engine disks page in the Google Cloud console.

    Go to Compute Engine disks

  2. In the list of disks, click the name of the disk indicated in the finding.

  3. On the Manage disk page, click Delete.

  4. To create a new disk with CSEK enabled, see Encrypt disks with customer-supplied encryption keys.

  5. Complete the remaining steps to enable the detector.

Enable the detector

  1. Go to Security Command Center's Assets page in the Google Cloud console.

    Go to Assets

  2. In the Resource type section of the Quick filters panel, select compute.Disk.

    If you don't see compute.Disk, click View more, enter Disk in the search field, and then click Apply.

    The Results panel updates to show only instances of the compute.Disk resource type.

  3. In the Display name column, select the box next to the name of the disk you want to use with CSEK, and then click Set Security Marks.

  4. In the dialog, click Add Mark.

  5. In the key field, enter enforce_customer_supplied_disk_encryption_keys, and in the value field, enter true.

  6. Click Save.

Learn about this finding type's supported assets and scan settings.

DNS logging disabled

Category name in the API: DNS_LOGGING_DISABLED

Monitoring of Cloud DNS logs provides visibility into the DNS names requested by clients within the VPC network. These logs can be monitored for anomalous domain names and evaluated against threat intelligence. We recommend enabling DNS logging for VPC networks.

Depending on the quantity of information, Cloud DNS logging costs can be significant. To understand your usage of the service and its cost, see Pricing for Google Cloud Observability: Cloud Logging.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, complete the following steps:

  1. Go to the VPC networks page in the Google Cloud console.

    Go to VPC networks

  2. In the list of networks, click the name of the VPC network.

  3. Create a new server policy (if one doesn't exist) or edit an existing policy:

    • If the network doesn't have a DNS server policy, complete the following steps:

      1. Click Edit.
      2. In the DNS server policy field, click Create a new server policy.
      3. Enter a name for the new server policy.
      4. Set Logs to On.
      5. Click Save.
    • If the network has a DNS server policy, complete the following steps:

      1. In the DNS server policy field, click the name of the DNS policy.
      2. Click Edit policy.
      3. Set Logs to On.
      4. Click Save.

Learn about this finding type's supported assets and scan settings.

DNSSEC disabled

Category name in the API: DNSSEC_DISABLED

Domain Name System Security Extensions (DNSSEC) is disabled for Cloud DNS zones.

DNSSEC validates DNS responses and mitigates risks, such as DNS hijacking and person-in-the-middle attacks, by cryptographically signing DNS records. You should enable DNSSEC. For more information, see DNS Security Extensions (DNSSEC) overview.

To remediate this finding, complete the following steps:

  1. Go to the Cloud DNS page in the Google Cloud console.

    Go to Cloud DNS networks

  2. Locate the row with the DNS zone indicated in the finding.

  3. Click the DNSSEC setting in the row and then, under DNSSEC, select On.

  4. Read the dialog that appears. If satisfied, click Enable.

Learn about this finding type's supported assets and scan settings.

Egress deny rule not set

Category name in the API: EGRESS_DENY_RULE_NOT_SET

An egress deny rule is not set on a firewall.

A firewall that denies all egress network traffic prevents any unwanted outbound network connections, except those connections other firewalls explicitly authorize. For more information, see Egress cases.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. Click Create Firewall Rule.

  3. Give the firewall a name and, optionally, a description.

  4. Under Direction of traffic, select Egress.

  5. Under Action on match, select Deny.

  6. In the Targets drop-down menu, select All instances in the network.

  7. In the Destination filter drop-down menu, select IP ranges, and then type 0.0.0.0/0 into the Destination IP ranges box.

  8. Under Protocols and ports, select Deny all.

  9. Click Disable Rule and then, under Enforcement, select Enabled.

  10. Click Create.
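
The equivalent rule can be created through the API. The following minimal Python sketch uses the Compute Engine client library; the project, network, and rule names are placeholders, and the priority of 65535 makes the rule evaluate last so that explicitly allowed egress still works.

    # Minimal sketch: create a lowest-priority deny-all egress rule with the
    # Compute Engine Python client.
    from google.cloud import compute_v1

    firewall = compute_v1.Firewall(
        name="deny-all-egress",
        network="projects/my-project/global/networks/my-network",
        direction="EGRESS",
        priority=65535,  # Evaluated last, after any explicit allow rules.
        destination_ranges=["0.0.0.0/0"],
        denied=[compute_v1.Denied(I_p_protocol="all")],
    )

    client = compute_v1.FirewallsClient()
    operation = client.insert(project="my-project", firewall_resource=firewall)
    operation.result()  # Wait for the rule to be created.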

Learn about this finding type's supported assets and scan settings.

Essential contacts not configured

Category name in the API: ESSENTIAL_CONTACTS_NOT_CONFIGURED

Your organization has not designated a person or group to receive notifications from Google Cloud about important events such as attacks, vulnerabilities, and data incidents within your Google Cloud organization. We recommend that you designate one or more persons or groups in your business organization as essential contacts.

To remediate this finding, complete the following steps:

  1. Go to the Essential Contacts page in the Google Cloud console.

    Go to Essential Contacts

  2. Make sure the organization appears in the resource selector at the top of the page. The resource selector tells you what project, folder, or organization you are currently managing contacts for.

  3. Click +Add contact. The Add a contact panel opens.

  4. In the Email and Confirm Email fields, enter the email address of the contact.

  5. From the Notification categories section, select the notification categories that you want the contact to receive communications for. Ensure that appropriate email addresses are configured for each of the following notification categories:

    1. Legal
    2. Security
    3. Suspension
    4. Technical
  6. Click Save.

Learn about this finding type's supported assets and scan settings.

Firewall not monitored

Category name in the API: FIREWALL_NOT_MONITORED

Log metrics and alerts aren't configured to monitor VPC Network Firewall rule changes.

Monitoring firewall rules creation and update events gives you insight into network access changes, and can help you quickly detect suspicious activity. For more information, see Overview of logs-based metrics.

Depending on the quantity of information, Cloud Monitoring costs can be significant. To understand your usage of the service and its costs, see Cost optimization for Google Cloud Observability.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, complete the following steps:

Create metric

  1. Go to the Logs-based Metrics page in the Google Cloud console.

    Go to Logs-based Metrics

  2. Click Create Metric.

  3. Under Metric Type, select Counter.

  4. Under Details:

    1. Set a Log metric name.
    2. Add a description.
    3. Set Units to 1.
  5. Under Filter selection, copy and paste the following text into the Build filter box, replacing existing text, if necessary:

      resource.type="gce_firewall_rule"
      AND (protoPayload.methodName:"compute.firewalls.insert"
      OR protoPayload.methodName:"compute.firewalls.patch"
      OR protoPayload.methodName:"compute.firewalls.delete")
    

  6. Click Create Metric. You see a confirmation.
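
As an alternative to the console, you can create the same counter metric with the gcloud CLI. The following is a minimal sketch; the metric name is a placeholder:

    # Create a logs-based counter metric for firewall rule changes (metric name is a placeholder)
    gcloud logging metrics create firewall-rule-changes \
        --description="Counts firewall rule insert, patch, and delete events" \
        --log-filter='resource.type="gce_firewall_rule" AND (protoPayload.methodName:"compute.firewalls.insert" OR protoPayload.methodName:"compute.firewalls.patch" OR protoPayload.methodName:"compute.firewalls.delete")'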

Create Alert Policy

  1. In the navigation panel of the Google Cloud console, select Logging, and then select Log-based Metrics:

    Go to Log-based Metrics

  2. Under the User-defined metrics section, select the metric you created in the previous section.
  3. Click More, and then click Create alert from metric.

    The New condition dialog opens with the metric and data transformation options pre-populated.

  4. Click Next.
    1. Review the pre-populated settings. You might want to modify the Threshold value.
    2. Click Condition name and enter a name for the condition.
  5. Click Next.
  6. To add notifications to your alerting policy, click Notification channels. In the dialog, select one or more notification channels from the menu, and then click OK.

    To be notified when incidents are opened and closed, check Notify on incident closure. By default, notifications are sent only when incidents are opened.

  7. Optional: Update the Incident autoclose duration. This field determines when Monitoring closes incidents in the absence of metric data.
  8. Optional: Click Documentation, and then add any information that you want included in a notification message.
  9. Click Alert name and enter a name for the alerting policy.
  10. Click Create Policy.

Learn about this finding type's supported assets and scan settings.

Firewall rule logging disabled

Category name in the API: FIREWALL_RULE_LOGGING_DISABLED

Firewall rules logging is disabled.

Firewall rules logging lets you audit, verify, and analyze the effects of your firewall rules. It can be useful for auditing network access or providing early warning that the network is being used in an unapproved manner. The cost of logs can be significant. For more information on Firewall Rules Logging and its cost, see Using Firewall Rules Logging.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the list of firewall rules, click the name of the desired firewall rule.

  3. Click Edit.

  4. Under Logs, select On.

  5. Click Save.
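
Alternatively, you can enable logging on the rule with the gcloud CLI. The following is a minimal sketch; RULE_NAME is a placeholder:

    # Enable Firewall Rules Logging on an existing rule (RULE_NAME is a placeholder)
    gcloud compute firewall-rules update RULE_NAME --enable-logging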

Learn about this finding type's supported assets and scan settings.

Flow logs disabled

Category name in the API: FLOW_LOGS_DISABLED

There is a VPC subnetwork that has flow logs disabled.

VPC Flow Logs record a sample of network flows sent from and received by VM instances. These logs can be used for network monitoring, forensics, real-time security analysis, and expense optimization. For more information about flow logs and their cost, see Using VPC Flow Logs.

To remediate this finding, complete the following steps:

  1. Go to the VPC networks page in the Google Cloud console.

    Go to VPC networks

  2. In the list of networks, click the name of the desired network.

  3. On the VPC network details page, click the Subnets tab.

  4. In the list of subnets, click the name of the subnet indicated in the finding.

  5. On the Subnet details page, click Edit.

  6. Under Flow logs, select On.
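
Alternatively, you can enable flow logs with the gcloud CLI. The following is a minimal sketch; SUBNET_NAME and REGION are placeholders:

    # Enable VPC Flow Logs on the subnet (SUBNET_NAME and REGION are placeholders)
    gcloud compute networks subnets update SUBNET_NAME \
        --region=REGION \
        --enable-flow-logs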

Learn about this finding type's supported assets and scan settings.

Flow logs settings not recommended

Category name in the API: VPC_FLOW_LOGS_SETTINGS_NOT_RECOMMENDED

In the configuration of a subnet in a VPC network, the VPC Flow Logs service is either off or is not configured according to CIS Benchmark 1.3 recommendations. VPC Flow Logs records a sample of network flows sent from and received by VM instances, which can be used to detect threats.

For more information about VPC Flow Logs and their cost, see Using VPC Flow Logs.

To remediate this finding, complete the following steps:

  1. Go to the VPC networks page in the Google Cloud console.

    Go to VPC networks

  2. In the list of networks, click the name of the network.

  3. On the VPC network details page, click the Subnets tab.

  4. In the list of subnets, click the name of the subnet indicated in the finding.

  5. On the Subnet details page, click Edit.

  6. Under Flow logs, select On.

    1. Optionally, modify the configuration of the logs by clicking Configure logs to expand the tab. The CIS Benchmarks recommend the following settings:
      1. Set the Aggregation Interval to 5 SEC.
      2. Under Additional fields, select the Include metadata option.
      3. Set the Sample rate to 100%.
      4. Click Save.
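
Alternatively, you can apply the recommended settings with the gcloud CLI. The following is a minimal sketch; SUBNET_NAME and REGION are placeholders:

    # Enable VPC Flow Logs with the CIS-recommended aggregation interval,
    # metadata, and sampling settings (SUBNET_NAME and REGION are placeholders)
    gcloud compute networks subnets update SUBNET_NAME \
        --region=REGION \
        --enable-flow-logs \
        --logging-aggregation-interval=interval-5-sec \
        --logging-flow-sampling=1.0 \
        --logging-metadata=include-all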

Learn about this finding type's supported assets and scan settings.

Full API access

Category name in the API: FULL_API_ACCESS

A Compute Engine instance is configured to use the default service account with full access to all Google Cloud APIs.

An instance configured with the default service account scope, Allow full access to all Cloud APIs, might allow users to perform operations or API calls for which they don't have IAM permissions. For more information, see Compute Engine default service account.

To remediate this finding, complete the following steps:

  1. Go to the VM instances page in the Google Cloud console.

    Go to VM instances

  2. In the list of instances, click the name of the instance in the finding.

  3. Click Stop if the instance is currently started.

  4. After the instance stops, click Edit.

  5. Under Service account, in the drop-down menu, select Compute Engine default service account.

  6. In the Access scopes section, ensure that Allow full access to all Cloud APIs is not selected.

  7. Click Save.

  8. Click Start to start the instance.
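
Alternatively, you can change the scopes with the gcloud CLI while the instance is stopped. The following is a minimal sketch; INSTANCE_NAME, ZONE, SERVICE_ACCOUNT_EMAIL, and the scope list are placeholders that you should adapt to what the instance actually needs:

    # Stop the instance, assign narrower scopes, and start it again (placeholder values)
    gcloud compute instances stop INSTANCE_NAME --zone=ZONE
    gcloud compute instances set-service-account INSTANCE_NAME \
        --zone=ZONE \
        --service-account=SERVICE_ACCOUNT_EMAIL \
        --scopes=logging-write,monitoring-write
    gcloud compute instances start INSTANCE_NAME --zone=ZONE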

Learn about this finding type's supported assets and scan settings.

HTTP load balancer

Category name in the API: HTTP_LOAD_BALANCER

A Compute Engine instance uses a load balancer that is configured to use a target HTTP proxy instead of a target HTTPS proxy.

To protect the integrity of your data and prevent intruders from tampering with your communications, configure your HTTP(S) load balancers to allow only HTTPS traffic. For more information, see External HTTP(S) Load Balancing overview.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, complete the following steps:

  1. Go to the Target proxies page in the Google Cloud console.

    Go to Target proxies

  2. In the list of target proxies, click the name of the target proxy in the finding.

  3. Click the link under the URL map.

  4. Click Edit.

  5. Click Frontend configuration.

  6. Delete all Frontend IP and port configurations that allow HTTP traffic and create new ones that allow HTTPS traffic.

Learn about this finding type's supported assets and scan settings.

Instance OS login disabled

Category name in the API: INSTANCE_OS_LOGIN_DISABLED

OS Login is disabled on this Compute Engine instance.

OS Login enables centralized SSH key management with IAM, and it disables metadata-based SSH key configuration on all instances in a project. Learn how to set up and configure OS Login.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, complete the following steps:

  1. Go to the VM instances page in the Google Cloud console.

    Go to VM instances

  2. In the list of instances, click the name of the instance in the finding.

  3. On the Instance details page that loads, click Stop.

  4. After the instance stops, click Edit.

  5. In the Custom metadata section, ensure that the item with the key enable-oslogin has the value TRUE.

  6. Click Save.

  7. Click Start to start the instance.
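
Alternatively, you can set the metadata key with the gcloud CLI. The following is a minimal sketch; INSTANCE_NAME and ZONE are placeholders:

    # Set enable-oslogin=TRUE on the instance (INSTANCE_NAME and ZONE are placeholders)
    gcloud compute instances add-metadata INSTANCE_NAME \
        --zone=ZONE \
        --metadata enable-oslogin=TRUE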

Learn about this finding type's supported assets and scan settings.

Integrity monitoring disabled

Category name in the API: INTEGRITY_MONITORING_DISABLED

Integrity monitoring is disabled on a GKE cluster.

Integrity monitoring lets you monitor and verify the runtime boot integrity of your shielded nodes using Monitoring. This lets you respond to integrity failures and prevent compromised nodes from being deployed into the cluster.

To remediate this finding, complete the following steps:

Once a node is provisioned, it can't be updated to enable integrity monitoring. You must create a new node pool with integrity monitoring enabled.

  1. Go to the Kubernetes clusters page in the Google Cloud console.

    Go to Kubernetes clusters

  2. Click on the name of the cluster in the finding.

  3. Click on Add Node Pool.

  4. On the Security tab, ensure that Enable integrity monitoring is selected.

  5. Click Create.

  6. To migrate your workloads from the existing non-conforming node pools to the new node pools, see Migrating workloads to different machine types.

  7. After your workloads have been moved, delete the original non-conforming node pool.

    1. On the Kubernetes cluster page, in the Node pools menu, click the name of the node pool you want to delete.
    2. Click Remove node pool.
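
Alternatively, you can create the replacement node pool with the gcloud CLI. The following is a minimal sketch; NEW_POOL_NAME, CLUSTER_NAME, and ZONE are placeholders:

    # Create a node pool with integrity monitoring enabled (placeholder values)
    gcloud container node-pools create NEW_POOL_NAME \
        --cluster=CLUSTER_NAME \
        --zone=ZONE \
        --shielded-integrity-monitoring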

Learn about this finding type's supported assets and scan settings.

Intranode visibility disabled

Category name in the API: INTRANODE_VISIBILITY_DISABLED

Intranode visibility is disabled for a GKE cluster.

Enabling intranode visibility makes your intranode Pod-to-Pod traffic visible to the networking fabric. With this feature, you can use VPC flow logging or other VPC features to monitor or control intranode traffic. To get logs, you need to enable VPC flow logs in the selected subnetwork. For more information, see Using VPC flow logs.

To remediate this finding, complete the following steps:

  1. Go to the Kubernetes clusters page in the Google Cloud console.

    Go to Kubernetes clusters

  2. Click the name of the cluster indicated in the finding.

  3. In the Networking section, click Edit in the Intranode visibility row.

    If the cluster configuration recently changed, the edit button might be disabled. If you aren't able to edit the cluster settings, wait a few minutes and then try again.

  4. In the dialog, select Enable Intranode visibility.

  5. Click Save Changes.
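
Alternatively, you can enable the setting with the gcloud CLI. The following is a minimal sketch; CLUSTER_NAME and ZONE are placeholders:

    # Enable intranode visibility on the cluster (CLUSTER_NAME and ZONE are placeholders)
    gcloud container clusters update CLUSTER_NAME \
        --zone=ZONE \
        --enable-intra-node-visibility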

Learn about this finding type's supported assets and scan settings.

IP alias disabled

Category name in the API: IP_ALIAS_DISABLED

A GKE cluster was created with alias IP ranges disabled.

When you enable alias IP ranges, GKE clusters allocate IP addresses from a known CIDR block, so your cluster is scalable and interacts better with Google Cloud products and entities. For more information, see Alias IP ranges overview.

To remediate this finding, complete the following steps:

You cannot migrate an existing cluster to use alias IPs. To create a new cluster with alias IPs enabled, do the following:

  1. Go to the Kubernetes clusters page in the Google Cloud console.

    Go to Kubernetes clusters

  2. Click Create.

  3. From the navigation pane, under Cluster, click Networking.

  4. Under Advanced networking options, select Enable VPC-native traffic routing (uses alias IP).

  5. Click Create.
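
Alternatively, you can create the VPC-native cluster with the gcloud CLI. The following is a minimal sketch; NEW_CLUSTER_NAME and ZONE are placeholders:

    # Create a new cluster with alias IP ranges (VPC-native routing) enabled (placeholder values)
    gcloud container clusters create NEW_CLUSTER_NAME \
        --zone=ZONE \
        --enable-ip-alias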

Learn about this finding type's supported assets and scan settings.

IP forwarding enabled

Category name in the API: IP_FORWARDING_ENABLED

IP forwarding is enabled on Compute Engine instances.

Prevent data loss or information disclosure by disabling IP forwarding of data packets for your VMs.

To remediate this finding, complete the following steps:

  1. Go to the VM instances page in the Google Cloud console.

    Go to VM instances

  2. In the list of instances, check the box next to the name of the instance in the finding.

  3. Click Delete.

  4. Select Create Instance to create a new instance to replace the one you deleted.

  5. To ensure IP forwarding is disabled, click Management, disks, networking, SSH keys, and then click Networking.

  6. Under Network interfaces, click Edit.

  7. Under IP forwarding, in the drop-down menu, ensure that Off is selected.

  8. Specify any other instance parameters, and then click Create. For more information, see Creating and starting a VM instance.

Learn about this finding type's supported assets and scan settings.

KMS key not rotated

Category name in the API: KMS_KEY_NOT_ROTATED

Rotation isn't configured on a Cloud KMS encryption key.

Rotating your encryption keys regularly provides protection in case a key gets compromised and limits the number of encrypted messages available to cryptanalysis for a specific key version. For more information, see Key rotation.

To remediate this finding, complete the following steps:

  1. Go to the Cloud KMS keys page in the Google Cloud console.

    Go to Cloud KMS keys

  2. Click the name of the key ring indicated in the finding.

  3. Click the name of the key indicated in the finding.

  4. Click Edit Rotation Period.

  5. Set the rotation period to a maximum of 90 days.

  6. Click Save.
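
Alternatively, you can configure rotation with the gcloud CLI. The following is a minimal sketch; KEY_NAME, KEY_RING_NAME, LOCATION, and NEXT_ROTATION_TIME are placeholders:

    # Set a 90-day rotation period and the time of the next rotation (placeholder values)
    gcloud kms keys update KEY_NAME \
        --keyring=KEY_RING_NAME \
        --location=LOCATION \
        --rotation-period=90d \
        --next-rotation-time=NEXT_ROTATION_TIME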

Learn about this finding type's supported assets and scan settings.

KMS project has owner

Category name in the API: KMS_PROJECT_HAS_OWNER

A user has roles/Owner permissions on a project that has cryptographic keys. For more information, see Permissions and roles.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, complete the following steps:

  1. Go to the IAM page in the Google Cloud console.

    Go to IAM

  2. If necessary, select the project in the finding.

  3. For each principal assigned the Owner role:

    1. Click Edit.
    2. In the Edit permissions panel, next to the Owner role, click Delete.
    3. Click Save.
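
Alternatively, you can remove the role binding with the gcloud CLI. The following is a minimal sketch; PROJECT_ID and EMAIL_ADDRESS are placeholders:

    # Remove the Owner role from a principal on the project (placeholder values)
    gcloud projects remove-iam-policy-binding PROJECT_ID \
        --member=user:EMAIL_ADDRESS \
        --role=roles/owner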

Learn about this finding type's supported assets and scan settings.

KMS public key

Category name in the API: KMS_PUBLIC_KEY

A Cloud KMS Cryptokey or Cloud KMS Key Ring is public and accessible to anyone on the internet. For more information, see Using IAM with Cloud KMS.

To remediate this finding, if it is related to a Cryptokey:

  1. Go to the Cryptographic Keys page in the Google Cloud console.

    Go to Cryptographic Keys

  2. Under Name, select the key ring that contains the cryptographic key related to the Security Health Analytics finding.

  3. On the Key ring details page that loads, select the checkbox next to the cryptographic key.

  4. If the INFO PANEL is not displayed, click the SHOW INFO PANEL button.

  5. Use the filter box preceding Role / Principal to search principals for allUsers and allAuthenticatedUsers, and click Delete to remove access for these principals.

To remediate this finding, if it is related to a Key Ring:

  1. Go to the Cryptographic Keys page in the Google Cloud console.

    Go to Cryptographic Keys

  2. Find the row with the key ring in the finding and select the checkbox.

  3. If the INFO PANEL is not displayed, click the SHOW INFO PANEL button.

  4. Use the filter box preceding Role / Principal to search principals for allUsers and allAuthenticatedUsers, and click Delete to remove access for these principals.
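
Alternatively, you can remove the public principals with the gcloud CLI. The following is a minimal sketch for a key; KEY_NAME, KEY_RING_NAME, LOCATION, and ROLE are placeholders, and gcloud kms keyrings remove-iam-policy-binding works the same way for a key ring. Repeat the command for allAuthenticatedUsers and for every role that these principals hold:

    # Remove the allUsers principal from the key's IAM policy (placeholder values)
    gcloud kms keys remove-iam-policy-binding KEY_NAME \
        --keyring=KEY_RING_NAME \
        --location=LOCATION \
        --member=allUsers \
        --role=ROLE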

Learn about this finding type's supported assets and scan settings.

KMS role separation

Category name in the API: KMS_ROLE_SEPARATION

This finding isn't available for project-level activations.

One or more principals have multiple Cloud KMS permissions assigned. We recommend that no account simultaneously has Cloud KMS Admin along with other Cloud KMS permissions. For more information, see Permissions and roles.

To remediate this finding, complete the following steps:

  1. Go to the IAM page in the Google Cloud console.

    Go to IAM

  2. For each principal listed in the finding, do the following:

    1. Check whether the role was inherited from a folder or organization resource by looking at the Inheritance column. If the column contains a link to a parent resource, click on the link to go to the parent resource's IAM page.
    2. Click Edit next to a principal.
    3. To remove permissions, click Delete next to Cloud KMS Admin. If you want to remove all of the principal's Cloud KMS permissions, click Delete next to each of the other Cloud KMS roles as well.
  3. Click Save.

Learn about this finding type's supported assets and scan settings.

Legacy authorization enabled

Category name in the API: LEGACY_AUTHORIZATION_ENABLED

Legacy Authorization is enabled on GKE clusters.

In Kubernetes, role-based access control (RBAC) lets you define roles with rules containing a set of permissions, and grant permissions at the cluster and namespace level. This feature provides better security by ensuring that users only have access to specific resources. Consider disabling legacy attribute-based access control (ABAC).

To remediate this finding, complete the following steps:

  1. Go to the Kubernetes clusters page in the Google Cloud console.

    Go to Kubernetes clusters

  2. Select the cluster listed in the Security Health Analytics finding.

  3. Click Edit.

    If the cluster configuration recently changed, the edit button might be disabled. If you aren't able to edit the cluster settings, wait a few minutes and then try again.

  4. On the Legacy Authorization drop-down list, select Disabled.

  5. Click Save.
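
Alternatively, you can disable legacy authorization with the gcloud CLI. The following is a minimal sketch; CLUSTER_NAME and ZONE are placeholders:

    # Disable ABAC (legacy authorization) on the cluster (CLUSTER_NAME and ZONE are placeholders)
    gcloud container clusters update CLUSTER_NAME \
        --zone=ZONE \
        --no-enable-legacy-authorization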

Learn about this finding type's supported assets and scan settings.

Legacy metadata enabled

Category name in the API: LEGACY_METADATA_ENABLED

Legacy metadata is enabled on GKE clusters.

Compute Engine's instance metadata server exposes legacy /0.1/ and /v1beta1/ endpoints, which do not enforce metadata query headers. Metadata query headers, which the /v1/ endpoint requires, make it more difficult for a potential attacker to retrieve instance metadata. Unless required, we recommend that you disable these legacy /0.1/ and /v1beta1/ APIs.

For more information, see Disabling and transitioning from legacy metadata APIs.

To remediate this finding, complete the following steps:

You can only disable legacy metadata APIs when creating a new cluster or when adding a new node pool to an existing cluster. To update an existing cluster and disable legacy metadata APIs, see Migrating workloads to different machine types.
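
When you create a replacement node pool with the gcloud CLI, you can disable the legacy endpoints through node metadata. The following is a minimal sketch; NEW_POOL_NAME, CLUSTER_NAME, and ZONE are placeholders:

    # Create a node pool whose nodes disable the legacy metadata endpoints (placeholder values)
    gcloud container node-pools create NEW_POOL_NAME \
        --cluster=CLUSTER_NAME \
        --zone=ZONE \
        --metadata disable-legacy-endpoints=true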

Learn about this finding type's supported assets and scan settings.

Legacy network

Category name in the API: LEGACY_NETWORK

A legacy network exists in a project.

Legacy networks are not recommended because many new Google Cloud security features are not supported in legacy networks. Instead, use VPC networks. For more information, see Legacy networks.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, complete the following steps:

  1. Go to the VPC networks page in the Google Cloud console.

    Go to VPC networks

  2. To create a new non-legacy network, click Create Network.

  3. Return to the VPC networks page.

  4. In the list of networks, click the name of the legacy network indicated in the finding.

  5. In the VPC network details page, click Delete VPC Network.
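
Alternatively, you can replace the legacy network with the gcloud CLI. The following is a minimal sketch; NEW_NETWORK_NAME and LEGACY_NETWORK_NAME are placeholders, and the legacy network must be empty before you can delete it:

    # Create an auto-mode VPC network, then delete the legacy network (placeholder values)
    gcloud compute networks create NEW_NETWORK_NAME --subnet-mode=auto
    gcloud compute networks delete LEGACY_NETWORK_NAME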

Learn about this finding type's supported assets and scan settings.

Load balancer logging disabled

Category name in the API: LOAD_BALANCER_LOGGING_DISABLED

Logging is disabled for the backend service in a load balancer.

Enabling logging for a load balancer allows you to view HTTP(S) network traffic for your web applications. For more information, see Load balancer.

To remediate this finding, complete the following steps:

  1. Go to the Cloud Load Balancing page in the Google Cloud console.

    Go to Cloud Load Balancing

  2. Click the name of your load balancer.

  3. Click Edit.

  4. Click Backend configuration.

  5. On the Backend configuration page, click Edit next to the backend service.

  6. In the Logging section, select Enable logging and choose the best sample rate for your project.

  7. To finish editing the backend service, click Update.

  8. To finish editing the load balancer, click Update.
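
Alternatively, you can enable logging on the backend service with the gcloud CLI. The following is a minimal sketch for a global backend service; BACKEND_SERVICE_NAME and the sample rate are placeholders:

    # Enable request logging on the backend service (placeholder values)
    gcloud compute backend-services update BACKEND_SERVICE_NAME \
        --global \
        --enable-logging \
        --logging-sample-rate=1.0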

Learn about this finding type's supported assets and scan settings.

Locked retention policy not set

Category name in the API: LOCKED_RETENTION_POLICY_NOT_SET

A locked retention policy is not set for logs.

A locked retention policy prevents logs from being overwritten and the log bucket from being deleted. For more information, see Bucket Lock.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, complete the following steps:

  1. Go to the Storage Browser page in the Google Cloud console.

    Go to Storage Browser

  2. Select the bucket listed in the Security Health Analytics finding.

  3. On the Bucket details page, click the Retention tab.

  4. If a retention policy has not been set, click Set Retention Policy.

  5. Enter a retention period.

  6. Click Save. The retention policy is shown in the Retention tab.

  7. Click Lock to ensure the retention period is not shortened or removed.
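
Alternatively, you can set and lock the retention policy with gsutil. The following is a minimal sketch; BUCKET_NAME and the 365-day period are placeholders, and locking a policy is irreversible:

    # Set a retention period and then lock it (placeholder values)
    gsutil retention set 365d gs://BUCKET_NAME
    gsutil retention lock gs://BUCKET_NAME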

Learn about this finding type's supported assets and scan settings.

Log not exported

Category name in the API: LOG_NOT_EXPORTED

A resource doesn't have an appropriate log sink configured.

Cloud Logging helps you quickly find the root cause of issues in your system and applications. However, most logs are only retained for 30 days by default. Export copies of all log entries to extend the storage period. For more information, see Overview of log exports.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, complete the following steps:

  1. Go to the Log Router page in the Google Cloud console.

    Go to Log Router

  2. Click Create Sink.

  3. Enter a sink name and select a destination for the sink. To ensure that all logs are exported, leave the inclusion and exclusion filters empty.

  4. Click Create Sink.
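
Alternatively, you can create the sink with the gcloud CLI. The following is a minimal sketch that exports all logs to a Cloud Storage bucket; SINK_NAME and BUCKET_NAME are placeholders:

    # Create a sink with no filter so that all log entries are exported (placeholder values)
    gcloud logging sinks create SINK_NAME \
        storage.googleapis.com/BUCKET_NAME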

Learn about this finding type's supported assets and scan settings.

Master authorized networks disabled

Category name in the API: MASTER_AUTHORIZED_NETWORKS_DISABLED

Control Plane Authorized Networks is not enabled on GKE clusters.

Control Plane Authorized Networks improves security for your container cluster by allowing only the IP address ranges that you specify to access your cluster's control plane. For more information, see Adding authorized networks for control plane access.

To remediate this finding, complete the following steps:

  1. Go to the Kubernetes clusters page in the Google Cloud console.

    Go to Kubernetes clusters

  2. Select the cluster listed in the Security Health Analytics finding.

  3. Click Edit.

    If the cluster configuration recently changed, the edit button might be disabled. If you aren't able to edit the cluster settings, wait a few minutes and then try again.

  4. On the Control Plane Authorized Networks drop-down list, select Enabled.

  5. Click Add authorized network.

  6. Specify the authorized networks you want to use.

  7. Click Save.
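
Alternatively, you can enable authorized networks with the gcloud CLI. The following is a minimal sketch; CLUSTER_NAME, ZONE, and CIDR_RANGE are placeholders:

    # Restrict control plane access to the specified CIDR range (placeholder values)
    gcloud container clusters update CLUSTER_NAME \
        --zone=ZONE \
        --enable-master-authorized-networks \
        --master-authorized-networks=CIDR_RANGE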

Learn about this finding type's supported assets and scan settings.

MFA not enforced

Category name in the API: MFA_NOT_ENFORCED

This finding isn't available for project-level activations.

Multi-factor authentication, specifically 2-Step Verification (2SV), is disabled for some users in your organization.

Multi-factor authentication is used to protect accounts from unauthorized access and is the most important tool for protecting your organization against compromised login credentials. For more information, see Protect your business with 2-Step Verification.

To remediate this finding, complete the following steps:

  1. Go to the Admin console page in the Google Cloud console.

    Go to Admin console

  2. Enforce 2-Step Verification for all organizational units.

Suppress findings of this type

To suppress findings of this type, define a mute rule that automatically mutes future findings of this type. For more information, see Mute findings in Security Command Center.

Although it is not a recommended way to suppress findings, you can also add dedicated security marks to assets so that Security Health Analytics detectors don't create security findings for those assets.

  • To prevent this finding from being activated again, add the security mark allow_mfa_not_enforced with a value of true to the asset.
  • To ignore potential violations for specific organizational units, add the excluded_orgunits security mark to the asset with a comma-separated list of organizational unit paths in the value field. For example, excluded_orgunits:/people/vendors/vendorA,/people/contractors/contractorA.

Learn about this finding type's supported assets and scan settings.

Network not monitored

Category name in the API: NETWORK_NOT_MONITORED

Log metrics and alerts aren't configured to monitor VPC network changes.

To detect incorrect or unauthorized changes to your network setup, monitor VPC network changes. For more information, see Overview of logs-based metrics.

Depending on the quantity of information, Cloud Monitoring costs can be significant. To understand your usage of the service and its costs, see Cost optimization for Google Cloud Observability.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, complete the following steps:

Create metric

  1. Go to the Logs-based Metrics page in the Google Cloud console.

    Go to Logs-based Metrics

  2. Click Create Metric.

  3. Under Metric Type, select Counter.

  4. Under Details:

    1. Set a Log metric name.
    2. Add a description.
    3. Set Units to 1.
  5. Under Filter selection, copy and paste the following text into the Build filter box, replacing existing text, if necessary:

      resource.type="gce_network"
      AND (protoPayload.methodName:"compute.networks.insert"
      OR protoPayload.methodName:"compute.networks.patch"
      OR protoPayload.methodName:"compute.networks.delete"
      OR protoPayload.methodName:"compute.networks.removePeering"
      OR protoPayload.methodName:"compute.networks.addPeering")
    

  6. Click Create Metric. You see a confirmation.

Create Alert Policy

  1. In the navigation panel of the Google Cloud console, select Logging, and then select Log-based Metrics:

    Go to Log-based Metrics

  2. Under the User-defined metrics section, select the metric you created in the previous section.
  3. Click More, and then click Create alert from metric.

    The New condition dialog opens with the metric and data transformation options pre-populated.

  4. Click Next.
    1. Review the pre-populated settings. You might want to modify the Threshold value.
    2. Click Condition name and enter a name for the condition.
  5. Click Next.
  6. To add notifications to your alerting policy, click Notification channels. In the dialog, select one or more notification channels from the menu, and then click OK.

    To be notified when incidents are opened and closed, check Notify on incident closure. By default, notifications are sent only when incidents are opened.

  7. Optional: Update the Incident autoclose duration. This field determines when Monitoring closes incidents in the absence of metric data.
  8. Optional: Click Documentation, and then add any information that you want included in a notification message.
  9. Click Alert name and enter a name for the alerting policy.
  10. Click Create Policy.

Learn about this finding type's supported assets and scan settings.

Network policy disabled

Category name in the API: NETWORK_POLICY_DISABLED

Network policy is disabled on GKE clusters.

By default, pod to pod communication is open. Open communication allows pods to connect directly across nodes, with or without network address translation. A NetworkPolicy resource is like a pod-level firewall that restricts connections between pods, unless the NetworkPolicy resource explicitly allows the connection. Learn how to define a network policy.

To remediate this finding, complete the following steps:

  1. Go to the Kubernetes clusters page in the Google Cloud console.

    Go to Kubernetes clusters

  2. Click the name of the cluster listed in the Security Health Analytics finding.

  3. Under Networking, in the row for Calico Kubernetes Network policy, click Edit.

    If the cluster configuration recently changed, the edit button might be disabled. If you aren't able to edit the cluster settings, wait a few minutes and then try again.

  4. In the dialog, select Enable Calico Kubernetes network policy for control plane and Enable Calico Kubernetes network policy for nodes.

  5. Click Save Changes.
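
Alternatively, you can enable network policy with the gcloud CLI. The following is a minimal sketch; CLUSTER_NAME and ZONE are placeholders:

    # Enable the network policy add-on on the control plane, then enable it on the nodes (placeholder values)
    gcloud container clusters update CLUSTER_NAME \
        --zone=ZONE \
        --update-addons=NetworkPolicy=ENABLED
    gcloud container clusters update CLUSTER_NAME \
        --zone=ZONE \
        --enable-network-policy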

Learn about this finding type's supported assets and scan settings.

Nodepool boot CMEK disabled

Category name in the API: NODEPOOL_BOOT_CMEK_DISABLED

Boot disks in this node pool are not encrypted with customer-managed encryption keys (CMEK). CMEK allows the user to configure the default encryption keys for boot disks in a node pool.

To remediate this finding, complete the following steps:

  1. Go to the Kubernetes clusters page in the Google Cloud console.

    Go to Kubernetes clusters

  2. In the list of clusters, click the name of the cluster in the finding.

  3. Click the Nodes tab.

  4. For each default-pool node pool, click Delete.

  5. When prompted to confirm, click Delete.

  6. To create new node pools using CMEK, see Using customer-managed encryption keys (CMEK). CMEK incurs additional costs related to Cloud KMS.
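
When you recreate the node pools with the gcloud CLI, you can specify the CMEK key for the boot disks. The following is a minimal sketch; all values, including the full Cloud KMS key name, are placeholders:

    # Create a node pool whose boot disks are encrypted with a customer-managed key (placeholder values)
    gcloud container node-pools create NEW_POOL_NAME \
        --cluster=CLUSTER_NAME \
        --zone=ZONE \
        --boot-disk-kms-key=projects/KMS_PROJECT_ID/locations/LOCATION/keyRings/KEY_RING_NAME/cryptoKeys/KEY_NAME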

Learn about this finding type's supported assets and scan settings.

Nodepool secure boot disabled

Category name in the API: NODEPOOL_SECURE_BOOT_DISABLED

Secure boot is disabled for a GKE cluster.

Enable Secure Boot for Shielded GKE Nodes to verify the digital signatures of node boot components. For more information, see Secure Boot.

To remediate this finding, complete the following steps:

Once a Node pool is provisioned, it can't be updated to enable Secure Boot. You must create a new Node pool with Secure Boot enabled.

  1. Go to the Kubernetes clusters page in the Google Cloud console.

    Go to Kubernetes clusters

  2. Click on the name of the cluster in the finding.

  3. Click on Add Node Pool.

  4. In the Node pools menu, do the following:

    1. Click the name of the new Node pool to expand the tab.
    2. Select Security, and then, under Shielded options, select Enable secure boot.
    3. Click Create.
    4. To migrate your workloads from the existing non-conforming node pools to the new node pools, see Migrating workloads to different machine types.
    5. After your workloads have been moved, delete the original non-conforming node pool.
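
Alternatively, you can create the replacement node pool with the gcloud CLI. The following is a minimal sketch; NEW_POOL_NAME, CLUSTER_NAME, and ZONE are placeholders:

    # Create a node pool with Secure Boot enabled (placeholder values)
    gcloud container node-pools create NEW_POOL_NAME \
        --cluster=CLUSTER_NAME \
        --zone=ZONE \
        --shielded-secure-boot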

Learn about this finding type's supported assets and scan settings.

Non org IAM member

Category name in the API: NON_ORG_IAM_MEMBER

A user outside of your organization or project has IAM permissions on a project or organization. Learn more about IAM permissions.

To remediate this finding, complete the following steps:

  1. Go to the IAM page in the Google Cloud console.

    Go to IAM

  2. Select the checkbox next to users outside your organization or project.

  3. Click Remove.

Learn about this finding type's supported assets and scan settings.

Object versioning disabled

Category name in the API: OBJECT_VERSIONING_DISABLED

Object versioning isn't enabled on a storage bucket where sinks are configured.

To support the retrieval of objects that are deleted or overwritten, Cloud Storage offers the Object Versioning feature. Enable Object Versioning to protect your Cloud Storage data from being overwritten or accidentally deleted. Learn how to Enable Object Versioning.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, use the gsutil versioning set on command with the appropriate value:

    gsutil versioning set on gs://finding.assetDisplayName

Replace finding.assetDisplayName with the name of the relevant bucket.

Learn about this finding type's supported assets and scan settings.

Open Cassandra port

Category name in the API: OPEN_CASSANDRA_PORT

Firewall rules that allow any IP address to connect to Cassandra ports might expose your Cassandra services to attackers. For more information, see VPC firewall rules overview.

The Cassandra service ports are:

  • TCP - 7000, 7001, 7199, 8888, 9042, 9160, 61620, 61621

This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the list of firewall rules, click the name of the firewall rule in the finding.

  3. Click Edit.

  4. Under Source IP ranges, delete 0.0.0.0/0.

  5. Add specific IP addresses or IP ranges that you want to let connect to the instance.

  6. Add specific protocols and ports you want to open on your instance.

  7. Click Save.
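
Alternatively, you can restrict the rule with the gcloud CLI; the same pattern applies to the other open-port findings in this section. The following is a minimal sketch; RULE_NAME, the 192.0.2.0/24 source range, and the tcp:7000 port are placeholders for the values that your environment actually needs:

    # Replace the open source range and ports with restricted values (placeholder values)
    gcloud compute firewall-rules update RULE_NAME \
        --source-ranges=192.0.2.0/24 \
        --allow=tcp:7000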

Learn about this finding type's supported assets and scan settings.

Open ciscosecure websm port

Category name in the API: OPEN_CISCOSECURE_WEBSM_PORT

Firewall rules that allow any IP address to connect to CiscoSecure/WebSM ports might expose your CiscoSecure/WebSM services to attackers. For more information, see VPC firewall rules overview.

The CiscoSecure/WebSM service ports are:

  • TCP - 9090

This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the list of firewall rules, click the name of the firewall rule in the finding.

  3. Click Edit.

  4. Under Source IP ranges, delete 0.0.0.0/0.

  5. Add specific IP addresses or IP ranges that you want to let connect to the instance.

  6. Add specific protocols and ports you want to open on your instance.

  7. Click Save.

Learn about this finding type's supported assets and scan settings.

Open directory services port

Category name in the API: OPEN_DIRECTORY_SERVICES_PORT

Firewall rules that allow any IP address to connect to Directory ports might expose your Directory services to attackers. For more information, see VPC firewall rules overview.

The Directory service ports are:

  • TCP - 445
  • UDP - 445

This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the list of firewall rules, click the name of the firewall rule in the finding.

  3. Click Edit.

  4. Under Source IP ranges, delete 0.0.0.0/0.

  5. Add specific IP addresses or IP ranges that you want to let connect to the instance.

  6. Add specific protocols and ports you want to open on your instance.

  7. Click Save.

Learn about this finding type's supported assets and scan settings.

Open DNS port

Category name in the API: OPEN_DNS_PORT

Firewall rules that allow any IP address to connect to DNS ports might expose your DNS services to attackers. For more information, see VPC firewall rules overview.

The DNS service ports are:

  • TCP - 53
  • UDP - 53

This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the list of firewall rules, click the name of the firewall rule in the finding.

  3. Click Edit.

  4. Under Source IP ranges, delete 0.0.0.0/0.

  5. Add specific IP addresses or IP ranges that you want to let connect to the instance.

  6. Add specific protocols and ports you want to open on your instance.

  7. Click Save.

Learn about this finding type's supported assets and scan settings.

Open Elasticsearch port

Category name in the API: OPEN_ELASTICSEARCH_PORT

Firewall rules that allow any IP address to connect to Elasticsearch ports might expose your Elasticsearch services to attackers. For more information, see VPC firewall rules overview.

The Elasticsearch service ports are:

  • TCP - 9200, 9300

This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the list of firewall rules, click the name of the firewall rule in the finding.

  3. Click Edit.

  4. Under Source IP ranges, delete 0.0.0.0/0.

  5. Add specific IP addresses or IP ranges that you want to let connect to the instance.

  6. Add specific protocols and ports you want to open on your instance.

  7. Click Save.

Learn about this finding type's supported assets and scan settings.

Open firewall

Category name in the API: OPEN_FIREWALL

Firewall rules that allow connections from all IP addresses, like 0.0.0.0/0, or from all ports can unnecessarily expose resources to attacks from unintended sources. These rules should be removed or scoped explicitly to the intended source IP ranges or ports. For example, in applications intended to be public, consider restricting allowed ports to those needed for the application, like 80 and 443. If your application needs to allow connections from all IP addresses or ports, consider adding the asset to an allowlist. Learn more about Updating firewall rules.

This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.

To remediate this finding, complete the following steps:

  1. Go to the Firewall rules page in the Google Cloud console.

    Go to Firewall rules

  2. Click the firewall rule listed in the Security Health Analytics finding, and then click Edit.

  3. Under Source IP ranges, edit the IP values to restrict the range of IPs that is allowed.

  4. Under Protocols and ports, select Specified protocols and ports, select the allowed protocols, and enter ports that are allowed.

  5. Click Save.

Learn about this finding type's supported assets and scan settings.

Open FTP port

Category name in the API: OPEN_FTP_PORT

Firewall rules that allow any IP address to connect to FTP ports might expose your FTP services to attackers. For more information, see VPC firewall rules overview.

The FTP service ports are:

  • TCP - 21

This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the list of firewall rules, click the name of the firewall rule in the finding.

  3. Click Edit.

  4. Under Source IP ranges, delete 0.0.0.0/0.

  5. Add specific IP addresses or IP ranges that you want to let connect to the instance.

  6. Add specific protocols and ports you want to open on your instance.

  7. Click Save.

Learn about this finding type's supported assets and scan settings.

Open group IAM member

Category name in the API: OPEN_GROUP_IAM_MEMBER

One or more principals that have access to an organization, project, or folder are Google Groups accounts that can be joined without approval.

Google Cloud customers can use Google Groups to manage roles and permissions for members in their organizations, or apply access policies to collections of users. Instead of granting roles directly to members, administrators can grant roles and permissions to Google Groups, and then add members to specific groups. Group members inherit all of a group's roles and permissions, which lets members access specific resources and services.

If an open Google Groups account is used as a principal in an IAM binding, anyone can inherit the associated role just by joining the group directly or indirectly (through a subgroup). We recommend revoking the roles of the open groups or restricting access to those groups.

To remediate this finding, perform one of the following procedures.

Remove the group from the IAM policy

  1. Go to the IAM page in the Google Cloud console.

    Go to IAM

  2. If necessary, select the project, folder, or organization in the finding.

  3. Revoke the role of each open group identified in the finding.

Restrict access to the open groups

  1. Sign in to Google Groups.
  2. Update the settings of each open group, and its subgroups, to specify who can join the group and who must approve them.

Learn about this finding type's supported assets and scan settings.

Open HTTP port

Category name in the API: OPEN_HTTP_PORT

Firewall rules that allow any IP address to connect to HTTP ports might expose your HTTP services to attackers. For more information, see VPC firewall rules overview.

The HTTP service ports are:

  • TCP - 80

This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the list of firewall rules, click the name of the firewall rule in the finding.

  3. Click Edit.

  4. Under Source IP ranges, delete 0.0.0.0/0.

  5. Add specific IP addresses or IP ranges that you want to let connect to the instance.

  6. Add specific protocols and ports you want to open on your instance.

  7. Click Save.

Learn about this finding type's supported assets and scan settings.

Open LDAP port

Category name in the API: OPEN_LDAP_PORT

Firewall rules that allow any IP address to connect to LDAP ports might expose your LDAP services to attackers. For more information, see VPC firewall rules overview.

The LDAP service ports are:

  • TCP - 389, 636
  • UDP - 389

This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the list of firewall rules, click the name of the firewall rule in the finding.

  3. Click Edit.

  4. Under Source IP ranges, delete 0.0.0.0/0.

  5. Add specific IP addresses or IP ranges that you want to let connect to the instance.

  6. Add specific protocols and ports you want to open on your instance.

  7. Click Save.

Learn about this finding type's supported assets and scan settings.

Open Memcached port

Category name in the API: OPEN_MEMCACHED_PORT

Firewall rules that allow any IP address to connect to Memcached ports might expose your Memcached services to attackers. For more information, see VPC firewall rules overview.

The Memcached service ports are:

  • TCP - 11211, 11214, 11215
  • UDP - 11211, 11214, 11215

This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the list of firewall rules, click the name of the firewall rule in the finding.

  3. Click Edit.

  4. Under Source IP ranges, delete 0.0.0.0/0.

  5. Add specific IP addresses or IP ranges that you want to let connect to the instance.

  6. Add specific protocols and ports you want to open on your instance.

  7. Click Save.

Learn about this finding type's supported assets and scan settings.

Open MongoDB port

Category name in the API: OPEN_MONGODB_PORT

Firewall rules that allow any IP address to connect to MongoDB ports might expose your MongoDB services to attackers. For more information, see VPC firewall rules overview.

The MongoDB service ports are:

  • TCP - 27017, 27018, 27019

This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the list of firewall rules, click the name of the firewall rule in the finding.

  3. Click Edit.

  4. Under Source IP ranges, delete 0.0.0.0/0.

  5. Add specific IP addresses or IP ranges that you want to let connect to the instance.

  6. Add specific protocols and ports you want to open on your instance.

  7. Click Save.

Learn about this finding type's supported assets and scan settings.

Open MySQL port

Category name in the API: OPEN_MYSQL_PORT

Firewall rules that allow any IP address to connect to MySQL ports might expose your MySQL services to attackers. For more information, see VPC firewall rules overview.

The MySQL service ports are:

  • TCP - 3306

This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the list of firewall rules, click the name of the firewall rule in the finding.

  3. Click Edit.

  4. Under Source IP ranges, delete 0.0.0.0/0.

  5. Add specific IP addresses or IP ranges that you want to let connect to the instance.

  6. Add specific protocols and ports you want to open on your instance.

  7. Click Save.

Learn about this finding type's supported assets and scan settings.

Open NetBIOS port

Category name in the API: OPEN_NETBIOS_PORT

Firewall rules that allow any IP address to connect to NetBIOS ports might expose your NetBIOS services to attackers. For more information, see VPC firewall rules overview.

The NetBIOS service ports are:

  • TCP - 137, 138, 139
  • UDP - 137, 138, 139

This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the list of firewall rules, click the name of the firewall rule in the finding.

  3. Click Edit.

  4. Under Source IP ranges, delete 0.0.0.0/0.

  5. Add specific IP addresses or IP ranges that you want to let connect to the instance.

  6. Add specific protocols and ports you want to open on your instance.

  7. Click Save.

Learn about this finding type's supported assets and scan settings.

Open OracleDB port

Category name in the API: OPEN_ORACLEDB_PORT

Firewall rules that allow any IP address to connect to OracleDB ports might expose your OracleDB services to attackers. For more information, see VPC firewall rules overview.

The OracleDB service ports are:

  • TCP - 1521, 2483, 2484
  • UDP - 2483, 2484

This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the list of firewall rules, click the name of the firewall rule in the finding.

  3. Click Edit.

  4. Under Source IP ranges, delete 0.0.0.0/0.

  5. Add specific IP addresses or IP ranges that you want to let connect to the instance.

  6. Add specific protocols and ports you want to open on your instance.

  7. Click Save.

Learn about this finding type's supported assets and scan settings.

Open POP3 port

Category name in the API: OPEN_POP3_PORT

Firewall rules that allow any IP address to connect to POP3 ports might expose your POP3 services to attackers. For more information, see VPC firewall rules overview.

The POP3 service ports are:

  • TCP - 110

This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the list of firewall rules, click the name of the firewall rule in the finding.

  3. Click Edit.

  4. Under Source IP ranges, delete 0.0.0.0/0.

  5. Add specific IP addresses or IP ranges that you want to let connect to the instance.

  6. Add specific protocols and ports you want to open on your instance.

  7. Click Save.

Learn about this finding type's supported assets and scan settings.

Open PostgreSQL port

Category name in the API: OPEN_POSTGRESQL_PORT

Firewall rules that allow any IP address to connect to PostgreSQL ports might expose your PostgreSQL services to attackers. For more information, see VPC firewall rules overview.

The PostgreSQL service ports are:

  • TCP - 5432
  • UDP - 5432

This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the list of firewall rules, click the name of the firewall rule in the finding.

  3. Click Edit.

  4. Under Source IP ranges, delete 0.0.0.0/0.

  5. Add specific IP addresses or IP ranges that you want to let connect to the instance.

  6. Add specific protocols and ports you want to open on your instance.

  7. Click Save.

Learn about this finding type's supported assets and scan settings.

Open RDP port

Category name in the API: OPEN_RDP_PORT

Firewall rules that allow any IP address to connect to RDP ports might expose your RDP services to attackers. For more information, see VPC firewall rules overview.

The RDP service ports are:

  • TCP - 3389
  • UDP - 3389

This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the list of firewall rules, click the name of the firewall rule in the finding.

  3. Click Edit.

  4. Under Source IP ranges, delete 0.0.0.0/0.

  5. Add specific IP addresses or IP ranges that you want to let connect to the instance.

  6. Add specific protocols and ports you want to open on your instance.

  7. Click Save.

Learn about this finding type's supported assets and scan settings.

Open Redis port

Category name in the API: OPEN_REDIS_PORT

Firewall rules that allow any IP address to connect to Redis ports might expose your Redis services to attackers. For more information, see VPC firewall rules overview.

The Redis service ports are:

  • TCP - 6379

This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the list of firewall rules, click the name of the firewall rule in the finding.

  3. Click Edit.

  4. Under Source IP ranges, delete 0.0.0.0/0.

  5. Add specific IP addresses or IP ranges that you want to let connect to the instance.

  6. Add specific protocols and ports you want to open on your instance.

  7. Click Save.

Learn about this finding type's supported assets and scan settings.

Open SMTP port

Category name in the API: OPEN_SMTP_PORT

Firewall rules that allow any IP address to connect to SMTP ports might expose your SMTP services to attackers. For more information, see VPC firewall rules overview.

The SMTP service ports are:

  • TCP - 25

This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the list of firewall rules, click the name of the firewall rule in the finding.

  3. Click Edit.

  4. Under Source IP ranges, delete 0.0.0.0/0.

  5. Add specific IP addresses or IP ranges that you want to let connect to the instance.

  6. Add specific protocols and ports you want to open on your instance.

  7. Click Save.

Learn about this finding type's supported assets and scan settings.

Open SSH port

Category name in the API: OPEN_SSH_PORT

Firewall rules that allow any IP address to connect to SSH ports might expose your SSH services to attackers. For more information, see VPC firewall rules overview.

The SSH service ports are:

  • SCTP - 22
  • TCP - 22

This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the list of firewall rules, click the name of the firewall rule in the finding.

  3. Click Edit.

  4. Under Source IP ranges, delete 0.0.0.0/0.

  5. Add specific IP addresses or IP ranges that you want to let connect to the instance.

  6. Add specific protocols and ports you want to open on your instance.

  7. Click Save.

Learn about this finding type's supported assets and scan settings.

Open Telnet port

Category name in the API: OPEN_TELNET_PORT

Firewall rules that allow any IP address to connect to Telnet ports might expose your Telnet services to attackers. For more information, see VPC firewall rules overview.

The Telnet service ports are:

  • TCP - 23

This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.

To remediate this finding, complete the following steps:

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the list of firewall rules, click the name of the firewall rule in the finding.

  3. Click Edit.

  4. Under Source IP ranges, delete 0.0.0.0/0.

  5. Add specific IP addresses or IP ranges that you want to let connect to the instance.

  6. Add specific protocols and ports you want to open on your instance.

  7. Click Save.

Learn about this finding type's supported assets and scan settings.

Org policy Confidential VM policy

Category name in the API: ORG_POLICY_CONFIDENTIAL_VM_POLICY

A Compute Engine resource is out of compliance with the constraints/compute.restrictNonConfidentialComputing organization policy. For more information about this org policy constraint, see Enforcing organization policy constraints.

Your organization requires this VM to have the Confidential VM service enabled. VMs that don't have this service enabled will not use runtime memory encryption, exposing them to runtime memory attacks.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, complete the following steps:

  1. Go to the VM instances page in the Google Cloud console.

    Go to VM instances

  2. In the list of instances, click the name of the instance in the finding.

  3. If the VM doesn't require the Confidential VM service, move it to a new folder or project.

  4. If the VM requires Confidential VM, click Delete.

  5. To create a new instance with Confidential VM enabled, see Quickstart: Creating a Confidential VM instance.
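
If you recreate the instance from the command line, a hedged gcloud sketch might look like the following; the instance name, zone, and machine type are placeholders, and Confidential VM requires a supported machine series such as N2D with host maintenance set to terminate:

    # Create a replacement instance with Confidential VM enabled.
    gcloud compute instances create INSTANCE_NAME \
        --zone=ZONE \
        --machine-type=n2d-standard-2 \
        --confidential-compute \
        --maintenance-policy=TERMINATE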

Learn about this finding type's supported assets and scan settings.

Org policy location restriction

Category name in the API: ORG_POLICY_LOCATION_RESTRICTION

The Organization Policy gcp.resourceLocations constraint lets you restrict the creation of new resources to Cloud Regions you select. For more information, see Restricting resource locations.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, complete the following steps:

The ORG_POLICY_LOCATION_RESTRICTION detector covers many resource types and remediation instructions are different for each resource. The general approach to remediate location violations includes the following:

  1. Copy, move, or back up the out-of-region resource or its data into a resource that is in-region. Read the documentation for individual services to get instructions on moving resources.
  2. Delete the original out-of-region resource or its data.

This approach is not possible for all resource types. For guidance, consult the customized recommendations that are provided in the finding.

Additional considerations

When remediating this finding, consider the following.

Managed resources

The lifecycles of resources are sometimes managed and controlled by other resources. For example, a managed Compute Engine instance group creates and destroys Compute Engine instances in accordance with the instance group's autoscaling policy. If managed and managing resources are in-scope for location enforcement, both might be flagged as violating the Organization Policy. Remediation of findings for managed resources should be done on the managing resource to ensure operational stability.

Resources in-use

Certain resources are used by other resources. For example, a Compute Engine disk that is attached to a running Compute Engine instance is considered to be in-use by the instance. If the resource in-use violates the location Organization Policy, you need to ensure that the resource is not in-use before addressing the location violation.

Learn about this finding type's supported assets and scan settings.

OS login disabled

Category name in the API: OS_LOGIN_DISABLED

OS Login is disabled on this Compute Engine instance.

OS Login enables centralized SSH key management with IAM, and it disables metadata-based SSH key configuration on all instances in a project. Learn how to set up and configure OS Login.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, complete the following steps:

  1. Go to the Metadata page in the Google Cloud console.

    Go to Metadata

  2. Click Edit, and then click Add item.

  3. Add an item with key enable-oslogin and value TRUE.
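
You can also set the same metadata key with the gcloud CLI. A minimal sketch, applied at the project level, follows:

    # Enable OS Login for all VM instances in the current project.
    gcloud compute project-info add-metadata \
        --metadata enable-oslogin=TRUE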

Learn about this finding type's supported assets and scan settings.

Over privileged account

Category name in the API: OVER_PRIVILEGED_ACCOUNT

A GKE node is using the Compute Engine default service account, which has broad access by default and might be over-privileged for running your GKE cluster.

To remediate this finding, complete the following steps:

Follow the instructions to Use least privilege Google service accounts.

Learn about this finding type's supported assets and scan settings.

Over privileged scopes

Category name in the API: OVER_PRIVILEGED_SCOPES

A node service account has broad access scopes.

Access scopes are the legacy method of specifying permissions for your instance. To reduce the possibility of a privilege escalation in an attack, create and use a minimally privileged service account to run your GKE cluster.

To remediate this finding, follow the instructions to Use least privilege Google service accounts.

Learn about this finding type's supported assets and scan settings.

Over privileged service account user

Category name in the API: OVER_PRIVILEGED_SERVICE_ACCOUNT_USER

A user has the iam.serviceAccountUser or iam.serviceAccountTokenCreator roles at the project, folder, or organization level, instead of for a specific service account.

Granting those roles to a user for a project, folder, or organization gives the user access to all existing and future service accounts at that scope. This situation can result in unintended escalation of privileges. For more information, see Service account permissions.

To remediate this finding, complete the following steps:

  1. Go to the IAM page in the Google Cloud console.

    Go to IAM

  2. If necessary, select the project, folder, or organization in the finding.

  3. For each principal assigned roles/iam.serviceAccountUser or roles/iam.serviceAccountTokenCreator, do the following:

    1. Click Edit.
    2. In the Edit permissions panel, next to the roles, click Delete.
    3. Click Save.
  4. Follow this guide to grant individual users permission to impersonate a single service account. You need to follow the guide for each service account you want to allow chosen users to impersonate.
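
A gcloud sketch of the same change follows: remove the project-level binding, then grant the role on a single service account instead. The project ID, user email, and service account email are placeholders.

    # Remove the broad, project-level grant.
    gcloud projects remove-iam-policy-binding PROJECT_ID \
        --member=user:USER_EMAIL \
        --role=roles/iam.serviceAccountUser

    # Re-grant the role on one specific service account only.
    gcloud iam service-accounts add-iam-policy-binding SERVICE_ACCOUNT_EMAIL \
        --member=user:USER_EMAIL \
        --role=roles/iam.serviceAccountUser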

Learn about this finding type's supported assets and scan settings.

Owner not monitored

Category name in the API: OWNER_NOT_MONITORED

Log metrics and alerts aren't configured to monitor Project Ownership assignments or changes.

The IAM Owner role has the highest level of privilege on a project. To secure your resources, set up alerts to get notified when new owners are added or removed. For more information, see Overview of logs-based metrics.

Depending on the quantity of information, Cloud Monitoring costs can be significant. To understand your usage of the service and its costs, see Cost optimization for Google Cloud Observability.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, complete the following steps:

Create metric

  1. Go to the Logs-based Metrics page in the Google Cloud console.

    Go to Logs-based Metrics

  2. Click Create Metric.

  3. Under Metric Type, select Counter.

  4. Under Details:

    1. Set a Log metric name.
    2. Add a description.
    3. Set Units to 1.
  5. Under Filter selection, copy and paste the following text into the Build filter box, replacing existing text, if necessary:

      (protoPayload.serviceName="cloudresourcemanager.googleapis.com")
      AND (ProjectOwnership OR projectOwnerInvitee)
      OR (protoPayload.serviceData.policyDelta.bindingDeltas.action="REMOVE"
      AND protoPayload.serviceData.policyDelta.bindingDeltas.role="roles/owner")
      OR (protoPayload.serviceData.policyDelta.bindingDeltas.action="ADD"
      AND protoPayload.serviceData.policyDelta.bindingDeltas.role="roles/owner")
    

  6. Click Create Metric. You see a confirmation.
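
If you prefer to create the metric with the gcloud CLI, the following sketch uses the same filter; the metric name and description are placeholders:

    gcloud logging metrics create project-ownership-changes \
        --description="Changes to project owner role bindings" \
        --log-filter='(protoPayload.serviceName="cloudresourcemanager.googleapis.com")
    AND (ProjectOwnership OR projectOwnerInvitee)
    OR (protoPayload.serviceData.policyDelta.bindingDeltas.action="REMOVE"
    AND protoPayload.serviceData.policyDelta.bindingDeltas.role="roles/owner")
    OR (protoPayload.serviceData.policyDelta.bindingDeltas.action="ADD"
    AND protoPayload.serviceData.policyDelta.bindingDeltas.role="roles/owner")'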

Create Alert Policy

  1. In the navigation panel of the Google Cloud console, select Logging, and then select Log-based Metrics:

    Go to Log-based Metrics

  2. Under the User-defined metrics section, select the metric you created in the previous section.
  3. Click More, and then click Create alert from metric.

    The New condition dialog opens with the metric and data transformation options pre-populated.

  4. Click Next.
    1. Review the pre-populated settings. You might want to modify the Threshold value.
    2. Click Condition name and enter a name for the condition.
  5. Click Next.
  6. To add notifications to your alerting policy, click Notification channels. In the dialog, select one or more notification channels from the menu, and then click OK.

    To be notified when incidents are opened and closed, check Notify on incident closure. By default, notifications are sent only when incidents are opened.

  7. Optional: Update the Incident autoclose duration. This field determines when Monitoring closes incidents in the absence of metric data.
  8. Optional: Click Documentation, and then add any information that you want included in a notification message.
  9. Click Alert name and enter a name for the alerting policy.
  10. Click Create Policy.

Learn about this finding type's supported assets and scan settings.

Pod security policy disabled

Category name in the API: POD_SECURITY_POLICY_DISABLED

The PodSecurityPolicy is disabled on a GKE cluster.

A PodSecurityPolicy is an admission controller resource that validates requests to create and update pods on a cluster. Clusters won't accept pods that don't meet the conditions defined in the PodSecurityPolicy.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, define and authorize PodSecurityPolicies, and enable the PodSecurityPolicy controller. For instructions, see Using PodSecurityPolicies.

Learn about this finding type's supported assets and scan settings.

Primitive roles used

Category name in the API: PRIMITIVE_ROLES_USED

A user has one of the following IAM basic roles: roles/owner, roles/editor, or roles/viewer. These roles are too permissive and shouldn't be used; if they are required at all, grant them per project only and prefer more granular roles.

For more information, see Understanding roles.

To remediate this finding, complete the following steps:

  1. Go to the IAM policy page in the Google Cloud console.

    Go to IAM policy

  2. For each user assigned a primitive role, consider using more granular roles instead.

Learn about this finding type's supported assets and scan settings.

Private cluster disabled

Category name in the API: PRIVATE_CLUSTER_DISABLED

A GKE cluster has private cluster mode disabled.

Private clusters allow nodes to only have private IP addresses. This feature limits outbound internet access for nodes. If a cluster node doesn't have a public IP address, it isn't discoverable or exposed to the public internet. You can still route traffic to a node by using an internal load balancer. For more information, see Private clusters.

You can't make an existing cluster private. To remediate this finding, create a new private cluster:

  1. Go to the Kubernetes clusters page in the Google Cloud console.

    Go to Kubernetes clusters

  2. Click Create Cluster.

  3. In the navigation menu, under Cluster, select Networking.

  4. Select the radio button for Private cluster.

  5. Under Advanced networking options, select the checkbox for Enable VPC-native traffic routing (uses alias IP).

  6. Click Create.
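
A minimal gcloud sketch for creating a private, VPC-native cluster follows; the cluster name, zone, and control-plane CIDR are placeholders:

    gcloud container clusters create CLUSTER_NAME \
        --zone=ZONE \
        --enable-private-nodes \
        --enable-ip-alias \
        --master-ipv4-cidr=172.16.0.32/28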

Learn about this finding type's supported assets and scan settings.

Private Google access disabled

Category name in the API: PRIVATE_GOOGLE_ACCESS_DISABLED

There are private subnets without access to Google public APIs.

Private Google Access enables VM instances with only internal (private) IP addresses to reach the public IP addresses of Google APIs and services.

For more information, see Configuring Private Google Access.

To remediate this finding, complete the following steps:

  1. Go to the VPC networks page in the Google Cloud console.

    Go to VPC networks

  2. In the list of networks, click the name of the desired network.

  3. On the VPC network details page, click the Subnets tab.

  4. In the list of subnets, click the name of the subnet associated with the Kubernetes cluster in the finding.

  5. On the Subnet details page, click Edit.

  6. Under Private Google Access, select On.

  7. Click Save.

  8. To remove public (external) IPs from VM instances whose only external traffic is to Google APIs, see Unassigning a static external IP address.
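
Equivalently, you can enable Private Google Access on the subnet with the gcloud CLI; the subnet name and region are placeholders:

    gcloud compute networks subnets update SUBNET_NAME \
        --region=REGION \
        --enable-private-ip-google-access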

Learn about this finding type's supported assets and scan settings.

Public bucket ACL

Category name in the API: PUBLIC_BUCKET_ACL

A bucket is public and anyone on the internet can access it.

For more information, see Overview of access control.

To remediate this finding, complete the following steps:

  1. Go to the Storage Browser page in the Google Cloud console.

    Go to Storage Browser

  2. Select the bucket listed in the Security Health Analytics finding.

  3. On the Bucket details page, click the Permissions tab.

  4. Next to View by, click Roles.

  5. In the Filter box, search for allUsers and allAuthenticatedUsers.

  6. Click Delete to remove all IAM permissions granted to allUsers and allAuthenticatedUsers.
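
You can remove the same bindings from the command line with gsutil; this is a minimal sketch and the bucket name is a placeholder:

    # Remove every role granted to allUsers and allAuthenticatedUsers on the bucket.
    gsutil iam ch -d allUsers gs://BUCKET_NAME
    gsutil iam ch -d allAuthenticatedUsers gs://BUCKET_NAME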

Learn about this finding type's supported assets and scan settings.

Public Compute image

Category name in the API: PUBLIC_COMPUTE_IMAGE

A Compute Engine image is public and anyone on the internet can access it. allUsers represents anyone on the internet and allAuthenticatedUsers represents anyone who is authenticated with a Google account; neither is constrained to users within your organization.

Compute Engine images might contain sensitive information like encryption keys or licensed software. Such sensitive information should not be publicly accessible. If you intended to make this Compute Engine image public, ensure that it does not contain any sensitive information.

For more information, see Access control overview.

To remediate this finding, complete the following steps:

  1. Go to the Compute Engine images page in the Google Cloud console.

    Go to Compute Engine images

  2. Select the box next to the image listed in the finding, and then click Show Info Panel.

  3. In the Filter box, search principals for allUsers and allAuthenticatedUsers.

  4. Expand the role for which you want to remove users.

  5. Click Delete to remove a user from that role.
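
A hedged gcloud sketch that removes a public binding from the image follows; the image name and role are placeholders, and you repeat the command for each role shown in the info panel:

    gcloud compute images remove-iam-policy-binding IMAGE_NAME \
        --member=allUsers \
        --role=roles/compute.imageUser
    gcloud compute images remove-iam-policy-binding IMAGE_NAME \
        --member=allAuthenticatedUsers \
        --role=roles/compute.imageUser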

Learn about this finding type's supported assets and scan settings.

Public dataset

Category name in the API: PUBLIC_DATASET

A BigQuery dataset is public and accessible to anyone on the internet. The IAM principal allUsers represents anyone on the internet and allAuthenticatedUsers represents anyone who is logged into a Google service; neither is constrained to users within your organization.

For more information, see Controlling access to datasets.

To remediate this finding, complete the following steps:

  1. Go to the BigQuery Explorer page in the Google Cloud console.

    Go to BigQuery Dataset

  2. In the list of datasets, click on the name of the dataset that is identified in the finding. The Dataset info panel opens.

  3. Near the top of the Dataset info panel, click SHARING.

  4. In the drop-down menu, click on Permissions.

  5. In the Dataset Permissions panel, enter allUsers and allAuthenticatedUsers, and remove access for these principals.

Learn about this finding type's supported assets and scan settings.

Public IP address

Category name in the API: PUBLIC_IP_ADDRESS

A Compute Engine instance has a public IP address.

To reduce your organization's attack surface, avoid assigning public IP addresses to your VMs. Stopped instances might still be flagged with a Public IP finding, for example, if the network interfaces are configured to assign an ephemeral public IP on start. Ensure that the network configurations for stopped instances do not include external access.

For more information, see Securely connecting to VM instances.

To remediate this finding, complete the following steps:

  1. Go to the VM instances page in the Google Cloud console.

    Go to VM instances

  2. In the list of instances, check the box next to the name of the instance in the finding.

  3. Click Edit.

  4. For each interface under Network interfaces, click Edit and set External IP to None.

  5. Click Done, and then click Save.
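
You can also remove the external IP with the gcloud CLI. This sketch assumes the default network interface and access config name; confirm the actual names with gcloud compute instances describe before running it:

    gcloud compute instances delete-access-config INSTANCE_NAME \
        --zone=ZONE \
        --network-interface=nic0 \
        --access-config-name="External NAT"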

Learn about this finding type's supported assets and scan settings.

Public log bucket

Category name in the API: PUBLIC_LOG_BUCKET

This finding isn't available for project-level activations.

A storage bucket is public and used as a log sink, meaning that anyone on the internet can access logs stored in this bucket. allUsers represents anyone on the internet and allAuthenticatedUsers represents anyone who is logged into a Google service; neither is constrained to users within your organization.

For more information, see Overview of access control.

To remediate this finding, complete the following steps:

  1. Go to the Cloud Storage browser page in the Google Cloud console.

    Go to Cloud Storage browser

  2. In the list of buckets, click the name of the bucket indicated in the finding.

  3. Click the Permissions tab.

  4. Remove allUsers and allAuthenticatedUsers from the list of principals.

Learn about this finding type's supported assets and scan settings.

Public SQL instance

Category name in the API: PUBLIC_SQL_INSTANCE

Your SQL instance has 0.0.0.0/0 as an allowed network. This occurrence means that any IPv4 client can pass the network firewall and make login attempts to your instance, including clients you might not have intended to allow. Clients still need valid credentials to successfully log in to your instance.

For more information, see Authorizing with authorized networks.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. In the navigation panel, click Connections.

  5. Under Authorized networks, delete 0.0.0.0/0 and add specific IP addresses or IP ranges that you want to let connect to your instance.

  6. Click Done, and then click Save.
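
An equivalent gcloud sketch follows; the instance name and range are placeholders, and note that --authorized-networks replaces the whole list, so include every range you still need:

    gcloud sql instances patch INSTANCE_NAME \
        --authorized-networks=203.0.113.0/24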

Learn about this finding type's supported assets and scan settings.

Pubsub CMEK disabled

Category name in the API: PUBSUB_CMEK_DISABLED

A Pub/Sub topic is not encrypted with customer-managed encryption keys (CMEK).

With CMEK, keys that you create and manage in Cloud KMS wrap the keys that Google uses to encrypt your data, giving you more control over access to your data.

To remediate this finding, delete the existing topic and create a new one:

  1. Go to Pub/Sub's Topics page in the Google Cloud console.

    Go to Topics

  2. If necessary, select the project containing the Pub/Sub topic.

  3. Select the checkbox next to the topic listed in the finding, and then click Delete.

  4. To create a new Pub/Sub topic with CMEK enabled, see Using customer-managed encryption keys. CMEK incurs additional costs related to Cloud KMS.

  5. Publish findings or other data to the CMEK-enabled Pub/Sub topic.
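
If you recreate the topic from the command line, a hedged sketch follows; the topic ID and the Cloud KMS key path are placeholders:

    gcloud pubsub topics create TOPIC_ID \
        --topic-encryption-key=projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY_NAME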

Learn about this finding type's supported assets and scan settings.

Route not monitored

Category name in the API: ROUTE_NOT_MONITORED

Log metrics and alerts aren't configured to monitor VPC network route changes.

Google Cloud routes are destinations and hops that define the path that network traffic takes from a VM instance to a destination IP. By monitoring changes to route tables, you can help ensure that all VPC traffic flows through an expected path.

For more information, see Overview of logs-based metrics.

Depending on the quantity of information, Cloud Monitoring costs can be significant. To understand your usage of the service and its costs, see Cost optimization for Google Cloud Observability.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, complete the following steps:

Create metric

  1. Go to the Logs-based Metrics page in the Google Cloud console.

    Go to Logs-based Metrics

  2. Click Create Metric.

  3. Under Metric Type, select Counter.

  4. Under Details:

    1. Set a Log metric name.
    2. Add a description.
    3. Set Units to 1.
  5. Under Filter selection, copy and paste the following text into the Build filter box, replacing existing text, if necessary:

      resource.type="gce_route"
      AND (protoPayload.methodName:"compute.routes.delete"
      OR protoPayload.methodName:"compute.routes.insert")
    

  6. Click Create Metric. You see a confirmation.

Create Alert Policy

  1. In the navigation panel of the Google Cloud console, select Logging, and then select Log-based Metrics:

    Go to Log-based Metrics

  2. Under the User-defined metrics section, select the metric you created in the previous section.
  3. Click More, and then click Create alert from metric.

    The New condition dialog opens with the metric and data transformation options pre-populated.

  4. Click Next.
    1. Review the pre-populated settings. You might want to modify the Threshold value.
    2. Click Condition name and enter a name for the condition.
  5. Click Next.
  6. To add notifications to your alerting policy, click Notification channels. In the dialog, select one or more notification channels from the menu, and then click OK.

    To be notified when incidents are opened and closed, check Notify on incident closure. By default, notifications are sent only when incidents are opened.

  7. Optional: Update the Incident autoclose duration. This field determines when Monitoring closes incidents in the absence of metric data.
  8. Optional: Click Documentation, and then add any information that you want included in a notification message.
  9. Click Alert name and enter a name for the alerting policy.
  10. Click Create Policy.

Learn about this finding type's supported assets and scan settings.

Redis role used on org

Category name in the API: REDIS_ROLE_USED_ON_ORG

This finding isn't available for project-level activations.

A Redis IAM role is assigned at the organization or folder level.

The following Redis IAM roles should be assigned per project only, not at the organization or folder level:

  • roles/redis.admin
  • roles/redis.viewer
  • roles/redis.editor

For more information, see Access control and permissions.

To remediate this finding, complete the following steps:

  1. Go to the IAM policy page in the Google Cloud console.

    Go to IAM policy

  2. Remove the Redis IAM roles indicated in the finding and add them on the individual projects instead.
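
A gcloud sketch of the same move follows; the organization ID, project ID, member, and role are placeholders:

    # Remove the role at the organization level.
    gcloud organizations remove-iam-policy-binding ORGANIZATION_ID \
        --member=user:USER_EMAIL \
        --role=roles/redis.admin

    # Grant it again on the specific project that needs it.
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member=user:USER_EMAIL \
        --role=roles/redis.admin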

Learn about this finding type's supported assets and scan settings.

Release channel disabled

Category name in the API: RELEASE_CHANNEL_DISABLED

A GKE cluster is not subscribed to a release channel.

Subscribe to a release channel to automate version upgrades for the GKE cluster. Release channels also reduce version-management complexity by letting you choose the balance of feature availability and stability that you need. For more information, see Release channels.

To remediate this finding, complete the following steps:

  1. Go to the Kubernetes clusters page in the Google Cloud console.

    Go to Kubernetes clusters

  2. In the Cluster basics section, click the edit icon in the Release channel row.

    If the cluster configuration recently changed, the edit button might be disabled. If you aren't able to edit the cluster settings, wait a few minutes and then try again.

  3. In the dialog, select Release channel, and then choose the release channel you want to subscribe to.

    If the control plane version of your cluster is not upgradeable to a release channel, that channel might be disabled as an option.

  4. Click Save Changes.
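
Or, with the gcloud CLI; the cluster name, location, and channel are placeholders:

    gcloud container clusters update CLUSTER_NAME \
        --zone=ZONE \
        --release-channel=regular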

Learn about this finding type's supported assets and scan settings.

RSASHA1 for signing

Category name in the API: RSASHA1_FOR_SIGNING

RSASHA1 is used for key signing in Cloud DNS zones. The algorithm used for key signing should not be weak.

To remediate this finding, replace the algorithm with a recommended one by following the Using advanced signing options guide.

Learn about this finding type's supported assets and scan settings.

Service account key not rotated

Category name in the API: SERVICE_ACCOUNT_KEY_NOT_ROTATED

This finding isn't available for project-level activations.

A user-managed service account key hasn't been rotated for more than 90 days.

In general, user-managed service account keys should be rotated at least every 90 days, to ensure that data cannot be accessed with an old key that might have been lost, compromised, or stolen. For more information, see Rotate service account keys to reduce security risk caused by leaked keys.

If you generated the public/private key pair yourself, stored the private key in a hardware security module (HSM), and uploaded the public key to Google, then you might not need to rotate the key every 90 days. Instead, you can rotate the key if you believe that it might have been compromised.

To remediate this finding, complete the following steps:

  1. Go to the Service Accounts page in the Google Cloud console.

    Go to Service Accounts

  2. If necessary, select the project indicated in the finding.

  3. In the list of service accounts, find the service account listed in the finding and click Delete. Before proceeding, consider the impact deleting a service account could have on your production resources.

  4. Create a new service account key to replace the old one. For more information, see Creating service account keys.
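
If you only need to rotate the key rather than delete the account, a gcloud sketch follows; the service account email and key ID are placeholders:

    # Create a replacement key and update your workloads to use it.
    gcloud iam service-accounts keys create /path/to/new-key.json \
        --iam-account=SERVICE_ACCOUNT_EMAIL

    # List the user-managed keys, then delete the old one.
    gcloud iam service-accounts keys list \
        --iam-account=SERVICE_ACCOUNT_EMAIL --managed-by=user
    gcloud iam service-accounts keys delete OLD_KEY_ID \
        --iam-account=SERVICE_ACCOUNT_EMAIL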

Learn about this finding type's supported assets and scan settings.

Service account role separation

Category name in the API: SERVICE_ACCOUNT_ROLE_SEPARATION

This finding isn't available for project-level activations.

One or more principals in your organization have multiple service account permissions assigned. No account should simultaneously have Service Account Admin along with other service account permissions. To learn about service accounts and the roles available to them, see Service accounts.

To remediate this finding, complete the following steps:

  1. Go to the IAM page in the Google Cloud console.

    Go to IAM

  2. For each principal listed in the finding, do the following:

    1. Check whether the role was inherited from a folder or organization resource by looking at the Inheritance column. If the column contains a link to a parent resource, click on the link to go to the parent resource's IAM page.
    2. Click Edit next to a principal.
    3. To remove permissions, click Delete next to Service Account Admin. If you want to remove all service account permissions, click Delete next to all other permissions.
  3. Click Save.

Learn about this finding type's supported assets and scan settings.

Shielded VM disabled

Category name in the API: SHIELDED_VM_DISABLED

Shielded VM is disabled on this Compute Engine instance.

Shielded VMs are virtual machines (VMs) on Google Cloud hardened by a set of security controls that help defend against rootkits and bootkits. Shielded VMs help ensure that the boot loader and firmware are signed and verified. Learn more about Shielded VM.

To remediate this finding, complete the following steps:

  1. Go to the VM instances page in the Google Cloud console.

    Go to VM instances

  2. Select the instance related to the Security Health Analytics finding.

  3. On the Instance details page that loads, click Stop.

  4. After the instance stops, click Edit.

  5. In the Shielded VM section, toggle Turn on vTPM and Turn on Integrity Monitoring to enable Shielded VM.

  6. Optionally, if you do not use any custom or unsigned drivers, then also enable Secure Boot.

  7. Click Save. The new configuration appears on the Instance details page.

  8. Click Start to start the instance.
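
The same change from the command line might look like this sketch; the instance name and zone are placeholders, and the instance image must support Shielded VM:

    gcloud compute instances stop INSTANCE_NAME --zone=ZONE
    gcloud compute instances update INSTANCE_NAME --zone=ZONE \
        --shielded-vtpm \
        --shielded-integrity-monitoring
    # Optionally, if you don't use custom or unsigned drivers,
    # also add --shielded-secure-boot to the update command.
    gcloud compute instances start INSTANCE_NAME --zone=ZONE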

Learn about this finding type's supported assets and scan settings.

SQL CMEK disabled

Category name in the API: SQL_CMEK_DISABLED

A SQL database instance is not using customer-managed encryption keys (CMEK).

With CMEK, keys that you create and manage in Cloud KMS wrap the keys that Google uses to encrypt your data, giving you more control over access to your data. For more information, see CMEK overviews for your product: Cloud SQL for MySQL, Cloud SQL for PostgreSQL, or Cloud SQL for SQL Server. CMEK incurs additional costs related to Cloud KMS.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Delete.

  4. To create a new instance with CMEK enabled, follow the instructions to configure CMEK for your product:

    1. Cloud SQL for MySQL
    2. Cloud SQL for PostgreSQL
    3. Cloud SQL for SQL Server

Learn about this finding type's supported assets and scan settings.

SQL contained database authentication

Category name in the API: SQL_CONTAINED_DATABASE_AUTHENTICATION

A Cloud SQL for SQL Server database instance does not have the contained database authentication database flag set to Off.

The contained database authentication flag controls whether you can create or attach contained databases to the Database Engine. A contained database includes all database settings and metadata required to define the database and has no configuration dependencies on the instance of the Database Engine where the database is installed.

Enabling this flag is not recommended because of the following:

  • Users can connect to the database without authentication at the Database Engine level.
  • Isolating the database from the Database Engine makes it possible to move the database to another instance of SQL Server.

Contained databases face unique threats that should be understood and mitigated by SQL Server Database Engine administrators. Most threats result from the USER WITH PASSWORD authentication process, which moves the authentication boundary from the Database Engine level to the database level.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. Under the Database flags section, set the contained database authentication database flag with the value Off.

  5. Click Save. The new configuration appears on the Instance overview page.
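
You can also set the flag with the gcloud CLI. This sketch applies equally to the other database-flag findings that follow; note that --database-flags overwrites the instance's entire flag list, so include every flag you want to keep:

    gcloud sql instances patch INSTANCE_NAME \
        --database-flags="contained database authentication=off"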

Learn about this finding type's supported assets and scan settings.

SQL cross DB ownership chaining

Category name in the API: SQL_CROSS_DB_OWNERSHIP_CHAINING

A Cloud SQL for SQL Server database instance does not have the cross db ownership chaining database flag set to Off.

The cross db ownership chaining flag lets you control cross-database ownership chaining at the database level or allow cross-database ownership chaining for all database statements.

Enabling this flag is not recommended unless all databases hosted by the SQL Server instance participate in cross-database ownership chaining and you are aware of the security implications of this setting.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. Under the Database flags section, set the cross db ownership chaining database flag with the value Off.

  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL external scripts enabled

Category name in the API: SQL_EXTERNAL_SCRIPTS_ENABLED

A Cloud SQL for SQL Server database instance does not have the external scripts enabled database flag set to Off.

When activated, this setting enables the execution of scripts with certain remote language extensions. Since this feature can adversely affect the security of the system, we recommend disabling it.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. In the Database flags section, set the external scripts enabled database flag with the value Off.

  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL instance not monitored

Category name in the API: SQL_INSTANCE_NOT_MONITORED

This finding isn't available for project-level activations.

Log metrics and alerts aren't configured to monitor Cloud SQL instance configuration changes.

Misconfiguration of SQL instance options can cause security risks. Disabling auto backup and high availability options could impact business continuity and not restricting authorized networks could increase exposure to untrusted networks. Monitoring changes to SQL instance configuration helps you reduce the time it takes to detect and correct misconfigurations.

For more information, see Overview of logs-based metrics.

Depending on the quantity of information, Cloud Monitoring costs can be significant. To understand your usage of the service and its costs, see Cost optimization for Google Cloud Observability.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, complete the following steps:

Create metric

  1. Go to the Logs-based Metrics page in the Google Cloud console.

    Go to Logs-based Metrics

  2. Click Create Metric.

  3. Under Metric Type, select Counter.

  4. Under Details:

    1. Set a Log metric name.
    2. Add a description.
    3. Set Units to 1.
  5. Under Filter selection, copy and paste the following text into the Build filter box, replacing existing text, if necessary:

      protoPayload.methodName="cloudsql.instances.update"
      OR protoPayload.methodName="cloudsql.instances.create"
      OR protoPayload.methodName="cloudsql.instances.delete"
    

  6. Click Create Metric. You see a confirmation.

Create Alert Policy

  1. In the navigation panel of the Google Cloud console, select Logging, and then select Log-based Metrics:

    Go to Log-based Metrics

  2. Under the User-defined metrics section, select the metric you created in the previous section.
  3. Click More, and then click Create alert from metric.

    The New condition dialog opens with the metric and data transformation options pre-populated.

  4. Click Next.
    1. Review the pre-populated settings. You might want to modify the Threshold value.
    2. Click Condition name and enter a name for the condition.
  5. Click Next.
  6. To add notifications to your alerting policy, click Notification channels. In the dialog, select one or more notification channels from the menu, and then click OK.

    To be notified when incidents are opened and closed, check Notify on incident closure. By default, notifications are sent only when incidents are opened.

  7. Optional: Update the Incident autoclose duration. This field determines when Monitoring closes incidents in the absence of metric data.
  8. Optional: Click Documentation, and then add any information that you want included in a notification message.
  9. Click Alert name and enter a name for the alerting policy.
  10. Click Create Policy.

Learn about this finding type's supported assets and scan settings.

SQL local infile

Category name in the API: SQL_LOCAL_INFILE

A Cloud SQL for MySQL database instance does not have the local_infile database flag set to Off. Due to security issues associated with the local_infile flag, it should be disabled. For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. Under the Database flags section, set the local_infile database flag with the value Off.

  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL log checkpoints disabled

Category name in the API: SQL_LOG_CHECKPOINTS_DISABLED

A Cloud SQL for PostgreSQL database instance does not have the log_checkpoints database flag set to On.

Enabling log_checkpoints causes checkpoints and restart points to be logged in the server log. Some statistics are included in the log messages, including the number of buffers written and the time spent writing them.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. Under the Database flags section, set the log_checkpoints database flag with the value On.

  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL log connections disabled

Category name in the API: SQL_LOG_CONNECTIONS_DISABLED

A Cloud SQL for PostgreSQL database instance does not have the log_connections database flag set to On.

Enabling the log_connections setting causes attempted connections to the server to be logged, along with successful completion of client authentication. The logs can be useful in troubleshooting issues and confirming unusual connection attempts to the server.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. Under the Database flags section, set the log_connections database flag with the value On.

  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL log disconnections disabled

Category name in the API: SQL_LOG_DISCONNECTIONS_DISABLED

A Cloud SQL for PostgreSQL database instance does not have the log_disconnections database flag set to On.

Enabling the log_disconnections setting creates log entries at the end of each session. The logs are useful in troubleshooting issues and confirming unusual activity across a time period. For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. Under the Database flags section, set the log_disconnections database flag with the value On.

  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL log duration disabled

Category name in the API: SQL_LOG_DURATION_DISABLED

A Cloud SQL for PostgreSQL database instance does not have the log_duration database flag set to On.

When log_duration is enabled, this setting causes the execution time and duration of each completed statement to be logged. Monitoring the amount of time it takes to execute queries can be crucial in identifying slow queries and troubleshooting database issues.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. Under the Database flags section, set the log_duration database flag to On.

  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL log error verbosity

Category name in the API: SQL_LOG_ERROR_VERBOSITY

A Cloud SQL for PostgreSQL database instance does not have the log_error_verbosity database flag set to default or verbose.

The log_error_verbosity flag controls the amount of detail in messages logged. The greater the verbosity, the more details are recorded in messages. We recommend setting this flag to default or verbose.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. In the Database flags section, set the log_error_verbosity database flag to default or verbose.

  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL log lock waits disabled

Category name in the API: SQL_LOG_LOCK_WAITS_DISABLED

A Cloud SQL for PostgreSQL database instance does not have the log_lock_waits database flag set to On.

Enabling the log_lock_waits setting creates log entries when session waits take longer than the deadlock_timeout time to acquire a lock. The logs are useful in determining whether lock waits are causing poor performance.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. Under the Database flags section, set the log_lock_waits database flag with the value On.

  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL log min duration statement enabled

Category name in the API: SQL_LOG_MIN_DURATION_STATEMENT_ENABLED

A Cloud SQL for PostgreSQL database instance does not have the log_min_duration_statement database flag set to -1.

The log_min_duration_statement flag causes SQL statements that run longer than a specified time to be logged. Consider disabling this setting because SQL statements might contain sensitive information that should not be logged. For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. Under the Database flags section, set the log_min_duration_statement database flag with the value -1.

  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL log min error statement

Category name in the API: SQL_LOG_MIN_ERROR_STATEMENT

A Cloud SQL for PostgreSQL database instance does not have the log_min_error_statement database flag set appropriately.

The log_min_error_statement flag controls whether SQL statements that cause error conditions are recorded in server logs. SQL statements of the specified severity or higher are logged with messages for the error statements. The greater the severity, the fewer messages are recorded.

If log_min_error_statement is not set to the correct value, messages might not be classified as error messages. A severity set too low might increase the number of messages and make it difficult to find actual errors. A severity set too high might cause error messages for actual errors to not be logged.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. Under the Database flags section, set the log_min_error_statement database flag with one of the following recommended values, according to your organization's logging policy.

    • debug5
    • debug4
    • debug3
    • debug2
    • debug1
    • info
    • notice
    • warning
    • error
  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL log min error statement severity

Category name in the API: SQL_LOG_MIN_ERROR_STATEMENT_SEVERITY

A Cloud SQL for PostgreSQL database instance does not have the log_min_error_statement database flag set appropriately.

The log_min_error_statement flag controls whether SQL statements that cause error conditions are recorded in server logs. SQL statements of the specified severity or stricter are logged with messages for the error statements. The stricter the severity, the fewer messages are recorded.

If log_min_error_statement is not set to the correct value, messages might not be classified as error messages. A severity set too low would increase the number of messages and make it difficult to find actual errors. A severity level that is too high (too strict) might cause error messages for actual errors to not be logged.

We recommend setting this flag to error or stricter.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. In the Database flags section, set the log_min_error_statement database flag with one of the following recommended values, according to your organization's logging policy.

    • error
    • log
    • fatal
    • panic
  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL log min messages

Category name in the API: SQL_LOG_MIN_MESSAGES

A Cloud SQL for PostgreSQL database instance does not have the log_min_messages database flag set to at minimum warning.

The log_min_messages flag controls which message levels are recorded in server logs. The higher the severity, the fewer messages are recorded. Setting the threshold too low can result in increased log storage size and length, making it difficult to find actual errors.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. Under the Database flags section, set the log_min_messages database flag with one of the following recommended values, according to your organization's logging policy.

    • debug5
    • debug4
    • debug3
    • debug2
    • debug1
    • info
    • notice
    • warning
  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL log executor stats enabled

Category name in the API: SQL_LOG_EXECUTOR_STATS_ENABLED

A Cloud SQL for PostgreSQL database instance does not have the log_executor_stats database flag set to Off.

When the log_executor_stats flag is activated, executor performance statistics are included in the PostgreSQL logs for each query. This setting can be useful for troubleshooting, but it can significantly increase the number of logs and performance overhead.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. Under the Database flags section, set the log_executor_stats database flag to Off.

  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL log hostname enabled

Category name in the API: SQL_LOG_HOSTNAME_ENABLED

A Cloud SQL for PostgreSQL database instance does not have the log_hostname database flag set to Off.

When the log_hostname flag is activated, the hostname of the connecting host is logged. By default, connection log messages only show the IP address. This setting can be useful for troubleshooting. However, it can incur overhead on server performance, because for each statement logged, DNS resolution is required to convert an IP address to a hostname.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. Under the Database flags section, set the log_hostname database flag to Off.

  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL log parser stats enabled

Category name in the API: SQL_LOG_PARSER_STATS_ENABLED

A Cloud SQL for PostgreSQL database instance does not have the log_parser_stats database flag set to Off.

When the log_parser_stats flag is activated, parser performance statistics are included in the PostgreSQL logs for each query. This can be useful for troubleshooting, but it can significantly increase the number of logs and performance overhead.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. Under the Database flags section, set the log_parser_stats database flag to Off.

  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL log planner stats enabled

Category name in the API: SQL_LOG_PLANNER_STATS_ENABLED

A Cloud SQL for PostgreSQL database instance does not have the log_planner_stats database flag set to Off.

When the log_planner_stats flag is activated, a crude profiling method for logging PostgreSQL planner performance statistics is used. This can be useful for troubleshooting, but it can significantly increase the number of logs and performance overhead.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. Under the Database flags section, set the log_planner_stats database flag to Off.

  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL log statement

Category name in the API: SQL_LOG_STATEMENT

A Cloud SQL for PostgreSQL database instance does not have the log_statement database flag set to ddl.

The value of this flag controls which SQL statements are logged. Logging helps troubleshoot operational problems and permits forensic analysis. If this flag isn't set to the correct value, relevant information might be skipped or might be hidden in too many messages. A value of ddl (all data definition statements) is recommended unless otherwise directed by your organization's logging policy.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. Under the Database flags section, set the log_statement database flag to ddl.

  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL log statement stats enabled

Category name in the API: SQL_LOG_STATEMENT_STATS_ENABLED

A Cloud SQL for PostgreSQL database instance does not have the log_statement_stats database flag set to Off.

When the log_statement_stats flag is activated, end-to-end performance statistics are included in the PostgreSQL logs for each query. This setting can be useful for troubleshooting, but it can significantly increase the number of logs and performance overhead.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. Under the Database flags section, set the log_statement_stats database flag to Off.

  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL log temp files

Category name in the API: SQL_LOG_TEMP_FILES

A Cloud SQL for PostgreSQL database instance does not have the log_temp_files database flag set to 0.

Temporary files can be created for sorts, hashes, and temporary query results. Setting the log_temp_files flag to 0 causes all temporary files information to be logged. Logging all temporary files is useful for identifying potential performance issues. For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. Under the Database flags section, set the log_temp_files database flag with the value 0.

  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL no root password

Category name in the API: SQL_NO_ROOT_PASSWORD

A MySQL database instance does not have a password set for the root account. You should add a password to the MySQL database instance. For more information, see MySQL users.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. On the Instance details page that loads, select the Users tab.

  4. Next to the root user, click More, and then select Change Password.

  5. Enter a new, strong password, and then click OK.
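
A gcloud sketch for setting the root password follows; the instance name is a placeholder, and --prompt-for-password avoids putting the password on the command line:

    gcloud sql users set-password root \
        --host=% \
        --instance=INSTANCE_NAME \
        --prompt-for-password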

Learn about this finding type's supported assets and scan settings.

SQL public IP

Category name in the API: SQL_PUBLIC_IP

A Cloud SQL database has a public IP address.

To reduce your organization's attack surface, Cloud SQL databases should not have public IP addresses. Private IP addresses provide improved network security and lower latency for your application.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. On the left-side menu, click on Connections.

  4. Click on the Networking tab and clear the Public IP check box.

  5. If the instance is not already configured to use a private IP, see Configuring private IP for an existing instance.

  6. Click Save.
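
As an alternative to the console, the public IP address can be removed with the gcloud CLI once the instance has a private IP configured. A minimal sketch, with INSTANCE_NAME as a placeholder:

# Sketch: remove the public IP address from the instance.
# The instance must already be configured with a private IP.
gcloud sql instances patch INSTANCE_NAME --no-assign-ip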

Learn about this finding type's supported assets and scan settings.

SQL remote access enabled

Category name in the API: SQL_REMOTE_ACCESS_ENABLED

A Cloud SQL for SQL Server database instance doesn't have the remote access database flag set to Off.

When activated, this setting grants permission to run local stored procedures from remote servers or remote stored procedures from the local server. This functionality can be abused to launch a Denial-of-Service (DoS) attack on remote servers by offloading query processing to a target. To prevent abuse, we recommend disabling this setting.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. In the Flags section, set remote access to Off.

  5. Click Save. The new configuration appears on the Instance overview page.
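
The flag can also be set from the command line. Because the SQL Server flag name contains a space, quote the flag list; INSTANCE_NAME is a placeholder, and --database-flags replaces all flags currently set on the instance.

# Sketch: turn off the SQL Server "remote access" flag.
gcloud sql instances patch INSTANCE_NAME \
    --database-flags="remote access=off"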

Learn about this finding type's supported assets and scan settings.

SQL skip show database disabled

Category name in the API: SQL_SKIP_SHOW_DATABASE_DISABLED

A Cloud SQL for MySQL database instance does not have the skip_show_database database flag set to On.

When activated, this flag prevents users from using the SHOW DATABASES statement if they don't have the SHOW DATABASES privilege. With this setting, users without explicit permission aren't able to see databases that belong to other users. We recommend enabling this flag.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. In the Flags section, set skip_show_database to On.

  5. Click Save. The new configuration appears on the Instance overview page.
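
A command-line equivalent, as a sketch (INSTANCE_NAME is a placeholder, and --database-flags overwrites any flags already set on the instance):

# Sketch: enable skip_show_database on a MySQL instance.
gcloud sql instances patch INSTANCE_NAME \
    --database-flags=skip_show_database=on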

Learn about this finding type's supported assets and scan settings.

SQL trace flag 3625

Category name in the API: SQL_TRACE_FLAG_3625

A Cloud SQL for SQL Server database instance doesn't have the 3625 (trace flag) database flag set to On.

This flag limits the amount of information returned to users who are not members of the sysadmin fixed server role, by masking the parameters of some error messages using asterisks (******). To help prevent the disclosure of sensitive information, we recommend enabling this flag.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. In the Database flags section, set 3625 to On.

  5. Click Save. The new configuration appears on the Instance overview page.
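
From the command line, trace flags are set by number like any other database flag. A sketch, with INSTANCE_NAME as a placeholder and the usual caveat that --database-flags replaces all existing flags:

# Sketch: enable SQL Server trace flag 3625.
gcloud sql instances patch INSTANCE_NAME \
    --database-flags="3625=on"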

Learn about this finding type's supported assets and scan settings.

SQL user connections configured

Category name in the API: SQL_USER_CONNECTIONS_CONFIGURED

A Cloud SQL for SQL Server database instance has the user connections database flag configured.

The user connections option specifies the maximum number of simultaneous user connections that are allowed on an instance of SQL Server. Because it's a dynamic (self-configuring) option, SQL Server adjusts the maximum number of user connections automatically as needed, up to the maximum value allowable. The default value is 0, which means that up to 32,767 user connections are allowed. For this reason, we don't recommend configuring the user connections database flag.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. In the Database flags section, next to user connections, click Delete.

  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL user options configured

Category name in the API: SQL_USER_OPTIONS_CONFIGURED

A Cloud SQL for SQL Server database instance has the user options database flag configured.

This setting overrides global default values of the SET options for all users. Since users and applications might assume the default database SET options are in use, setting the user options might cause unexpected results. For this reason, we don't recommend configuring the user options database flag.

For more information, see Configuring database flags.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. Click Edit.

  4. In the Database flags section, next to user options, click Delete.

  5. Click Save. The new configuration appears on the Instance overview page.

Learn about this finding type's supported assets and scan settings.

SQL weak root password

Category name in the API: SQL_WEAK_ROOT_PASSWORD

A MySQL database instance has a weak password set for the root account. You should set a strong password for the instance. For more information, see MySQL users.

To remediate this finding, complete the following steps:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. On the Instance details page that loads, select the Users tab.

  4. Next to the root user, click More , and then select Change Password.

  5. Enter a new, strong password, and then click OK.

Learn about this finding type's supported assets and scan settings.

SSL not enforced

Category name in the API: SSL_NOT_ENFORCED

A Cloud SQL database instance doesn't require all incoming connections to use SSL.

To avoid leaking sensitive data in transit through unencrypted communications, all incoming connections to your SQL database instance should use SSL. Learn more about Configuring SSL/TLS.

To remediate this finding, allow only SSL connections for your SQL instances:

  1. Go to the Cloud SQL Instances page in the Google Cloud console.

    Go to Cloud SQL Instances

  2. Select the instance listed in the Security Health Analytics finding.

  3. On the Connections tab, click either Allow only SSL connections or Require trusted client certificates. For more information, see Enforce SSL/TLS encryption.

  4. If you chose Require trusted client certificates, create a new client certificate. For more information, see Create a new client certificate.
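
You can also enforce SSL-only connections with the gcloud CLI. A minimal sketch, with INSTANCE_NAME as a placeholder:

# Sketch: require all incoming connections to use SSL/TLS.
gcloud sql instances patch INSTANCE_NAME --require-ssl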

Learn about this finding type's supported assets and scan settings.

Too many KMS users

Category name in the API: TOO_MANY_KMS_USERS

Limit the number of principal users that can use cryptographic keys to three. The following predefined roles grant permissions to encrypt, decrypt, or sign data using cryptographic keys:

  • roles/owner
  • roles/cloudkms.cryptoKeyEncrypterDecrypter
  • roles/cloudkms.cryptoKeyEncrypter
  • roles/cloudkms.cryptoKeyDecrypter
  • roles/cloudkms.signer
  • roles/cloudkms.signerVerifier

For more information, see Permissions and roles.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

To remediate this finding, complete the following steps:

  1. Go to the Cloud KMS keys page in the Google Cloud console.

    Go to Cloud KMS keys

  2. Click the name of the key ring indicated in the finding.

  3. Click the name of the key indicated in the finding.

  4. Select the box next to the primary version, and then click Show Info Panel.

  5. Reduce the number of principals having permissions to encrypt, decrypt, or sign data to three or fewer. To revoke permissions, click Delete next to each principal.
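
Individual bindings can also be revoked with the gcloud CLI. The following sketch uses placeholder names for the key, key ring, location, and member; adjust the role to whichever of the listed roles the principal holds.

# Sketch: revoke one principal's ability to use the key.
gcloud kms keys remove-iam-policy-binding KEY_NAME \
    --keyring=KEY_RING \
    --location=LOCATION \
    --member="user:USER_EMAIL" \
    --role="roles/cloudkms.cryptoKeyEncrypterDecrypter"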

Learn about this finding type's supported assets and scan settings.

User managed service account key

Category name in the API: USER_MANAGED_SERVICE_ACCOUNT_KEY

A user manages a service account key. Service account keys are a security risk if not managed correctly. You should choose a more secure alternative to service account keys whenever possible. If you must authenticate with a service account key, you are responsible for the security of the private key and for other operations described by Best practices for managing service account keys. If you are prevented from creating a service account key, service account key creation might be disabled for your organization. For more information, see Managing secure-by-default organization resources.

To remediate this finding, complete the following steps:

  1. Go to the Service Accounts page in the Google Cloud console.

    Go to Service Accounts

  2. If necessary, select the project indicated in the finding.

  3. Delete user-managed service account keys indicated in the finding, if they are not used by any application.
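
You can also list and delete user-managed keys with the gcloud CLI. A sketch with placeholder values; confirm that no application still uses a key before deleting it.

# Sketch: list user-managed keys for a service account, then delete one.
gcloud iam service-accounts keys list \
    --iam-account=SA_NAME@PROJECT_ID.iam.gserviceaccount.com \
    --managed-by=user

gcloud iam service-accounts keys delete KEY_ID \
    --iam-account=SA_NAME@PROJECT_ID.iam.gserviceaccount.com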

Learn about this finding type's supported assets and scan settings.

Weak SSL policy

Category name in the API: WEAK_SSL_POLICY

A Compute Engine instance has a weak SSL policy or uses the Google Cloud default SSL policy with TLS version less than 1.2.

HTTPS and SSL Proxy load balancers use SSL policies to determine the protocol and cipher suites used in the TLS connections established between users and the internet. These connections encrypt sensitive data to prevent malicious eavesdroppers from accessing it. A weak SSL policy permits clients using outdated versions of TLS to connect with a less secure cipher suite or protocol. For a list of recommended and outdated cipher suites, visit the iana.org TLS parameters page.

For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.

The remediation steps for this finding differ depending on whether this finding was triggered by the use of a default Google Cloud SSL policy or an SSL policy that allows a weak cipher suite or a minimum TLS version less than 1.2. Follow the procedure below that corresponds to the trigger of the finding.

Default Google Cloud SSL policy remediation

  1. Go to the Target proxies page in the Google Cloud console.

    Go to Target proxies

  2. Find the target proxy indicated in the finding and note the forwarding rules in the In use by column.

  3. To create a new SSL policy, see Using SSL policies. The policy should have a Minimum TLS version of 1.2 and a Modern or Restricted Profile.

  4. To use a Custom profile, ensure the following cipher suites are disabled:

    • TLS_RSA_WITH_AES_128_GCM_SHA256
    • TLS_RSA_WITH_AES_256_GCM_SHA384
    • TLS_RSA_WITH_AES_128_CBC_SHA
    • TLS_RSA_WITH_AES_256_CBC_SHA
    • TLS_RSA_WITH_3DES_EDE_CBC_SHA
  5. Apply the SSL policy to each forwarding rule you previously noted.
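
As a command-line alternative, you can create a hardened SSL policy and attach it to the affected proxy with gcloud. This is a sketch with placeholder names; use target-ssl-proxies instead of target-https-proxies for SSL Proxy load balancers.

# Sketch: create an SSL policy with TLS 1.2 and the MODERN profile,
# then attach it to the target proxy noted in the finding.
gcloud compute ssl-policies create POLICY_NAME \
    --profile MODERN \
    --min-tls-version 1.2

gcloud compute target-https-proxies update PROXY_NAME \
    --ssl-policy POLICY_NAME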

Weak cipher suite or down-level TLS version allowed remediation

  1. In the Google Cloud console, go to the SSL policies page .

    Go to SSL policies

  2. Find the SSL policy whose In use by column lists the load balancer indicated in the finding.

  3. Click the name of the policy.

  4. Click Edit.

  5. Change Minimum TLS version to TLS 1.2 and Profile to Modern or Restricted.

  6. To use a Custom profile, ensure the following cipher suites are disabled:

    • TLS_RSA_WITH_AES_128_GCM_SHA256
    • TLS_RSA_WITH_AES_256_GCM_SHA384
    • TLS_RSA_WITH_AES_128_CBC_SHA
    • TLS_RSA_WITH_AES_256_CBC_SHA
    • TLS_RSA_WITH_3DES_EDE_CBC_SHA
  7. Click Save.

Learn about this finding type's supported assets and scan settings.

Web UI enabled

Category name in the API: WEB_UI_ENABLED

The GKE web UI (dashboard) is enabled.

A highly privileged Kubernetes service account backs the Kubernetes web interface. If compromised, the service account can be abused. If you are already using the Google Cloud console, the Kubernetes web interface extends your attack surface unnecessarily. Learn about Disabling the Kubernetes web interface.

To remediate this finding, disable the Kubernetes web interface:

  1. Go to the Kubernetes clusters page in the Google Cloud console.

    Go to Kubernetes clusters

  2. Click the name of the cluster listed in the Security Health Analytics finding.

  3. Click Edit.

    If the cluster configuration recently changed, the edit button might be disabled. If you aren't able to edit the cluster settings, wait a few minutes and then try again.

  4. Click Add-ons. The section expands to display available add-ons.

  5. On the Kubernetes dashboard drop-down list, select Disabled.

  6. Click Save.
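
The dashboard add-on can also be disabled with the gcloud CLI. A sketch with placeholder cluster and zone values:

# Sketch: disable the Kubernetes dashboard add-on on a cluster.
gcloud container clusters update CLUSTER_NAME \
    --zone=ZONE \
    --update-addons=KubernetesDashboard=DISABLED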

Learn about this finding type's supported assets and scan settings.

Workload Identity disabled

Category name in the API: WORKLOAD_IDENTITY_DISABLED

Workload Identity is disabled on a GKE cluster.

Workload Identity is the recommended way to access Google Cloud services from within GKE because it offers improved security properties and manageability. Enabling it protects some potentially sensitive system metadata from user workloads running on your cluster. Learn about Metadata concealment.

To remediate this finding, follow the guide to Enable Workload Identity on a cluster.
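
For reference, enabling Workload Identity on an existing cluster is typically a gcloud update such as the following sketch; CLUSTER_NAME, ZONE, and PROJECT_ID are placeholders, and existing node pools also need their workload metadata mode updated as described in the guide.

# Sketch: enable Workload Identity on an existing cluster.
gcloud container clusters update CLUSTER_NAME \
    --zone=ZONE \
    --workload-pool=PROJECT_ID.svc.id.goog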

Learn about this finding type's supported assets and scan settings.

Remediate AWS misconfigurations

AWS Cloud Shell Full Access Restricted

Category name in the API: ACCESS_AWSCLOUDSHELLFULLACCESS_RESTRICTED

AWS CloudShell is a convenient way of running CLI commands against AWS services. The managed IAM policy AWSCloudShellFullAccess provides full access to CloudShell, including the ability to upload and download files between a user's local system and the CloudShell environment. Within the CloudShell environment, a user has sudo permissions and can access the internet. It is therefore feasible to install file transfer software, for example, and move data from CloudShell to external internet servers.

Recommendation: Ensure access to AWSCloudShellFullAccess is restricted
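
If you use the AWS CLI, the following sketch shows one way to find where the policy is attached and detach it from a user; the user name is a placeholder, and analogous detach-group-policy and detach-role-policy commands exist for groups and roles.

# Sketch: find attachments of the managed policy, then detach it from a user.
aws iam list-entities-for-policy \
    --policy-arn arn:aws:iam::aws:policy/AWSCloudShellFullAccess

aws iam detach-user-policy \
    --user-name <user_name> \
    --policy-arn arn:aws:iam::aws:policy/AWSCloudShellFullAccess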

Learn about this finding type's supported assets and scan settings.

Access Keys Rotated Every 90 Days or Less

Category name in the API: ACCESS_KEYS_ROTATED_90_DAYS_LESS

Access keys consist of an access key ID and secret access key, which are used to sign programmatic requests that you make to AWS. AWS users need their own access keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI), Tools for Windows PowerShell, the AWS SDKs, or direct HTTP calls using the APIs for individual AWS services. It is recommended that all access keys be regularly rotated.

Recommendation: Ensure access keys are rotated every 90 days or less

To remediate this finding, complete the following steps:

AWS console

  1. Go to Management Console (https://console.aws.amazon.com/iam)
  2. Click on Users
  3. Click on Security Credentials
  4. As an Administrator
    • Click on Make Inactive for keys that have not been rotated in 90 Days
  5. As an IAM User
    • Click on Make Inactive or Delete for keys which have not been rotated or used in 90 Days
  6. Click on Create Access Key
  7. Update programmatic call with new Access Key credentials

AWS CLI

  1. While the first access key is still active, create a second access key, which is active by default. Run the following command:

aws iam create-access-key

At this point, the user has two active access keys.

  2. Update all applications and tools to use the new access key.

  3. Determine whether the first access key is still in use by using this command:

aws iam get-access-key-last-used

One approach is to wait several days and then check the old access key for any use before proceeding.

  4. Even if step 3 indicates no use of the old key, it is recommended that you do not immediately delete the first access key. Instead, change the state of the first access key to Inactive using this command:

aws iam update-access-key

  5. Use only the new access key to confirm that your applications are working. Any applications and tools that still use the original access key will stop working at this point because they no longer have access to AWS resources. If you find such an application or tool, you can switch its state back to Active to reenable the first access key. Then return to step 2 and update this application to use the new key.

  6. After you wait some period of time to ensure that all applications and tools have been updated, you can delete the first access key with this command:

aws iam delete-access-key

Learn about this finding type's supported assets and scan settings.

All Expired Ssl Tls Certificates Stored Aws Iam Removed

Category name in the API: ALL_EXPIRED_SSL_TLS_CERTIFICATES_STORED_AWS_IAM_REMOVED

To enable HTTPS connections to your website or application in AWS, you need an SSL/TLS server certificate. You can use ACM or IAM to store and deploy server certificates. Use IAM as a certificate manager only when you must support HTTPS connections in a region that is not supported by ACM. IAM securely encrypts your private keys and stores the encrypted version in IAM SSL certificate storage. IAM supports deploying server certificates in all regions, but you must obtain your certificate from an external provider for use with AWS. You cannot upload an ACM certificate to IAM. Additionally, you cannot manage your certificates from the IAM Console.

Recommendation: Ensure that all the expired SSL/TLS certificates stored in AWS IAM are removed

To remediate this finding, complete the following steps:

AWS console

Removing expired certificates via the AWS Management Console is not currently supported. To delete SSL/TLS certificates stored in IAM via the AWS API, use the AWS Command Line Interface (CLI).

AWS CLI

To delete an expired certificate, run the following command, replacing CERTIFICATE_NAME with the name of the certificate to delete:

aws iam delete-server-certificate --server-certificate-name <CERTIFICATE_NAME>

When the preceding command is successful, it does not return any output.

Learn about this finding type's supported assets and scan settings.

Autoscaling Group Elb Healthcheck Required

Category name in the API: AUTOSCALING_GROUP_ELB_HEALTHCHECK_REQUIRED

This checks whether your Auto Scaling groups that are associated with a load balancer are using Elastic Load Balancing health checks.

This ensures that the group can determine an instance's health based on additional tests provided by the load balancer. Using Elastic Load Balancing health checks can help support the availability of applications that use EC2 Auto Scaling groups.

Recommendation: Checks that all Auto Scaling groups associated with a load balancer use health checks

To remediate this finding, complete the following steps:

AWS Console

To enable Elastic Load Balancing health checks

  1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
  2. In the navigation pane, under Auto Scaling, choose Auto Scaling Groups.
  3. Select the check box for your group.
  4. Choose Edit.
  5. Under Health checks, for Health check type, choose ELB.
  6. For Health check grace period, enter 300.
  7. At the bottom of the page, choose Update.

For more information on using a load balancer with an Auto Scaling group, see the AWS Auto Scaling User Guide.
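
AWS CLI

The same change can be made from the command line; the following is a sketch in which the group name is a placeholder.

# Sketch: switch the Auto Scaling group to ELB health checks
# with a 300-second grace period.
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name <asg_name> \
    --health-check-type ELB \
    --health-check-grace-period 300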

Learn about this finding type's supported assets and scan settings.

Auto Minor Version Upgrade Feature Enabled Rds Instances

Category name in the API: AUTO_MINOR_VERSION_UPGRADE_FEATURE_ENABLED_RDS_INSTANCES

Ensure that RDS database instances have the Auto Minor Version Upgrade flag enabled so that they automatically receive minor engine upgrades during the specified maintenance window. This way, RDS instances get the new features, bug fixes, and security patches for their database engines.

Recommendation: Ensure Auto Minor Version Upgrade feature is Enabled for RDS Instances

To remediate this finding, complete the following steps:

AWS console

  1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/.
  2. In the left navigation panel, click on Databases.
  3. Select the RDS instance that you want to update.
  4. Click the Modify button at the top right.
  5. On the Modify DB Instance: <instance identifier> page, in the Maintenance section, for Auto minor version upgrade, select the Yes radio button.
  6. At the bottom of the page, click Continue. Then select Apply Immediately to apply the changes immediately, or Apply during the next scheduled maintenance window to avoid any downtime.
  7. Review the changes and click Modify DB Instance. The instance status should change from available to modifying and back to available. Once the feature is enabled, the Auto Minor Version Upgrade status should change to Yes.

AWS CLI

  1. Run the describe-db-instances command to list all RDS database instance names available in the selected AWS region:

aws rds describe-db-instances --region <regionName> --query 'DBInstances[*].DBInstanceIdentifier'

The command output should return each database instance identifier.

  2. Run the modify-db-instance command to modify the selected RDS instance configuration. This command applies the changes immediately; remove --apply-immediately to apply the changes during the next scheduled maintenance window and avoid any downtime:

aws rds modify-db-instance --region <regionName> --db-instance-identifier <dbInstanceIdentifier> --auto-minor-version-upgrade --apply-immediately

The command output should reveal the new configuration metadata for the RDS instance. Check the value of the AutoMinorVersionUpgrade parameter.

  3. Run the describe-db-instances command to check whether the Auto Minor Version Upgrade feature has been successfully enabled:

aws rds describe-db-instances --region <regionName> --db-instance-identifier <dbInstanceIdentifier> --query 'DBInstances[*].AutoMinorVersionUpgrade'

If the output returns true, the feature is enabled and minor engine upgrades will be applied to the selected RDS instance.

Learn about this finding type's supported assets and scan settings.

Aws Config Enabled All Regions

Category name in the API: AWS_CONFIG_ENABLED_ALL_REGIONS

AWS Config is a web service that performs configuration management of supported AWS resources within your account and delivers log files to you. The recorded information includes the configuration item (AWS resource), relationships between configuration items (AWS resources), and any configuration changes between resources. It is recommended that AWS Config be enabled in all regions.

Recommendation: Ensure AWS Config is enabled in all regions

To remediate this finding, complete the following steps:

AWS console

  1. Select the region you want to focus on in the top right of the console
  2. Click Services
  3. Click Config
  4. If a Config recorder is enabled in this region, you should navigate to the Settings page from the navigation menu on the left hand side. If a Config recorder is not yet enabled in this region then you should select "Get Started".
  5. Select "Record all resources supported in this region"
  6. Choose to include global resources (IAM resources)
  7. Specify an S3 bucket in the same account or in another managed AWS account
  8. Create an SNS Topic from the same AWS account or another managed AWS account

AWS CLI

  1. Ensure there is an appropriate S3 bucket, SNS topic, and IAM role per the AWS Config Service prerequisites.

  2. Run this command to create a new configuration recorder:

aws configservice put-configuration-recorder --configuration-recorder name=default,roleARN=arn:aws:iam::012345678912:role/myConfigRole --recording-group allSupported=true,includeGlobalResourceTypes=true

  3. Create a delivery channel configuration file locally (for example, deliveryChannel.json) which specifies the channel attributes, populated from the prerequisites set up previously:

{
 "name": "default",
 "s3BucketName": "my-config-bucket",
 "snsTopicARN": "arn:aws:sns:us-east-1:012345678912:my-config-notice",
 "configSnapshotDeliveryProperties": {
 "deliveryFrequency": "Twelve_Hours"
 }
}

  4. Run this command to create a new delivery channel, referencing the JSON configuration file made in the previous step:

aws configservice put-delivery-channel --delivery-channel file://deliveryChannel.json

  5. Start the configuration recorder by running the following command:

aws configservice start-configuration-recorder --configuration-recorder-name default

Learn about this finding type's supported assets and scan settings.

Aws Security Hub Enabled

Category name in the API: AWS_SECURITY_HUB_ENABLED

Security Hub collects security data from across AWS accounts, services, and supported third-party partner products and helps you analyze your security trends and identify the highest priority security issues. When you enable Security Hub, it begins to consume, aggregate, organize, and prioritize findings from AWS services that you have enabled, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie. You can also enable integrations with AWS partner security products.

Recommendation: Ensure AWS Security Hub is enabled

To remediate this finding, complete the following steps:

AWS console

  1. Use the credentials of the IAM identity to sign in to the Security Hub console.
  2. When you open the Security Hub console for the first time, choose Enable AWS Security Hub.
  3. On the welcome page, the Security standards section lists the security standards that Security Hub supports.
  4. Choose Enable Security Hub.

AWS CLI

  1. Run the enable-security-hub command. To enable the default standards, include --enable-default-standards:

aws securityhub enable-security-hub --enable-default-standards

  2. To enable Security Hub without the default standards, include --no-enable-default-standards:

aws securityhub enable-security-hub --no-enable-default-standards

Learn about this finding type's supported assets and scan settings.

Cloudtrail Logs Encrypted Rest Using Kms Cmks

Category name in the API: CLOUDTRAIL_LOGS_ENCRYPTED_REST_USING_KMS_CMKS

AWS CloudTrail is a web service that records AWS API calls for an account and makes those logs available to users and resources in accordance with IAM policies. AWS Key Management Service (KMS) is a managed service that helps create and control the encryption keys used to encrypt account data, and uses Hardware Security Modules (HSMs) to protect the security of encryption keys. CloudTrail logs can be configured to leverage server side encryption (SSE) and KMS customer created master keys (CMK) to further protect CloudTrail logs. It is recommended that CloudTrail be configured to use SSE-KMS.

Recommendation: Ensure CloudTrail logs are encrypted at rest using KMS CMKs

To remediate this finding, complete the following steps:

AWS console

  1. Sign in to the AWS Management Console and open the CloudTrail console at https://console.aws.amazon.com/cloudtrail
  2. In the left navigation pane, choose Trails .
  3. Click on a Trail
  4. Under the S3 section click on the edit button (pencil icon)
  5. Click Advanced
  6. Select an existing CMK from the KMS key Id drop-down menu
    • Note: Ensure the CMK is located in the same region as the S3 bucket
    • Note: You will need to apply a KMS Key policy on the selected CMK in order for CloudTrail as a service to encrypt and decrypt log files using the CMK provided. Steps are provided here for editing the selected CMK Key policy
  7. Click Save
  8. You will see a notification message stating that you need to have decrypt permissions on the specified KMS key to decrypt log files.
  9. Click Yes

AWS CLI

aws cloudtrail update-trail --name <trail_name> --kms-key-id <cloudtrail_kms_key>
aws kms put-key-policy --key-id <cloudtrail_kms_key> --policy <cloudtrail_kms_key_policy>

Learn about this finding type's supported assets and scan settings.

Cloudtrail Log File Validation Enabled

Category name in the API: CLOUDTRAIL_LOG_FILE_VALIDATION_ENABLED

CloudTrail log file validation creates a digitally signed digest file containing a hash of each log that CloudTrail writes to S3. These digest files can be used to determine whether a log file was changed, deleted, or unchanged after CloudTrail delivered the log. It is recommended that file validation be enabled on all CloudTrails.

Recommendation: Ensure CloudTrail log file validation is enabled

To remediate this finding, complete the following steps:

AWS console

  1. Sign in to the AWS Management Console and open the CloudTrail console at https://console.aws.amazon.com/cloudtrail
  2. Click Trails in the left navigation pane
  3. Click the target trail
  4. Within the General details section, click Edit
  5. Under the Advanced settings section, select the Enabled check box for Log file validation
  6. Click Save changes

AWS CLI

aws cloudtrail update-trail --name <trail_name> --enable-log-file-validation

Note that periodic validation of logs using these digests can be performed by running the following command:

aws cloudtrail validate-logs --trail-arn <trail_arn> --start-time <start_time> --end-time <end_time>

Learn about this finding type's supported assets and scan settings.

Cloudtrail Trails Integrated Cloudwatch Logs

Category name in the API: CLOUDTRAIL_TRAILS_INTEGRATED_CLOUDWATCH_LOGS

AWS CloudTrail is a web service that records AWS API calls made in a given AWS account. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. CloudTrail uses Amazon S3 for log file storage and delivery, so log files are stored durably. In addition to capturing CloudTrail logs within a specified S3 bucket for long term analysis, real time analysis can be performed by configuring CloudTrail to send logs to CloudWatch Logs. For a trail that is enabled in all regions in an account, CloudTrail sends log files from all those regions to a CloudWatch Logs log group. It is recommended that CloudTrail logs be sent to CloudWatch Logs.

Recommendation: Ensure CloudTrail trails are integrated with CloudWatch Logs

To remediate this finding, complete the following steps:

AWS console

  1. Login to the CloudTrail console at https://console.aws.amazon.com/cloudtrail/
  2. Select the trail that needs to be updated.
  3. Scroll down to CloudWatch Logs
  4. Click Edit
  5. Under CloudWatch Logs, select the Enabled check box
  6. Under Log group, create a new log group or select an existing one
  7. Edit the log group name to match the trail, or pick the existing CloudWatch log group
  8. Under IAM Role, create a new role or select an existing one
  9. Edit the role name to match the trail, or pick the existing IAM role
  10. Click Save changes

AWS CLI

aws cloudtrail update-trail --name <trail_name> --cloud-watch-logs-log-group-arn <cloudtrail_log_group_arn> --cloud-watch-logs-role-arn <cloudtrail_cloudwatchLogs_role_arn>

Learn about this finding type's supported assets and scan settings.

Cloudwatch Alarm Action Check

Category name in the API: CLOUDWATCH_ALARM_ACTION_CHECK

This checks whether Amazon CloudWatch has actions defined when an alarm transitions between the 'OK', 'ALARM', and 'INSUFFICIENT_DATA' states.

Configuring actions for the ALARM state in Amazon CloudWatch alarms is very important to trigger an immediate response when monitored metrics breach thresholds. It ensures quick problem resolution, reduces downtime and enables automated remediation, maintaining system health and preventing outages.

The control checks the following: the alarm has at least one action; the alarm has at least one action when it transitions to the 'INSUFFICIENT_DATA' state from any other state; and, optionally, the alarm has at least one action when it transitions to the 'OK' state from any other state.

Recommendation: Checks whether CloudWatch alarms have at least one alarm action, one INSUFFICIENT_DATA action, or one OK action enabled.

To remediate this finding, complete the following steps:

AWS Console

To configure ALARM actions for your Amazon CloudWatch alarms, do the following.

  1. Open the Amazon CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
  2. In the navigation pane, under 'Alarms', select 'All alarms'.
  3. Choose the Amazon CloudWatch alarm you want to modify, choose 'Actions' and select 'Edit'.
  4. On the left, choose 'Step 2 - optional Configure actions'.
  5. For 'Alarm state trigger', select the 'In alarm' option to set up an ALARM-based action.
  6. To send a notification to a newly created SNS topic, select 'Create new topic'.
  7. In the 'Create new topic...' box, specify a unique SNS topic name.
  8. In the 'Email endpoints that will receive the notification…' box, specify one or more email addresses.
  9. Select 'Create Topic' to create the required Amazon SNS topic.
  10. At the bottom right, select 'Next', 'Next', and choose 'Update alarm' to apply the changes.
  11. Open your email client, and in the mail from AWS Notifications, click the link to confirm your subscription to the SNS topic in question.
  12. Repeat steps 4 - 11, choosing 'OK' and 'Insufficient data' for 'Alarm state trigger' in step 5, to set up actions for those two states.
  13. Repeat the process for all other CloudWatch alarms within the same AWS region.
  14. Repeat the process for all other CloudWatch alarms in all other AWS regions.

Learn about this finding type's supported assets and scan settings.

Cloudwatch Log Group Encrypted

Category name in the API: CLOUDWATCH_LOG_GROUP_ENCRYPTED

This check ensures CloudWatch logs are configured with KMS.

Log group data is always encrypted in CloudWatch Logs. By default, CloudWatch Logs uses server-side encryption for the log data at rest. As an alternative, you can use AWS Key Management Service for this encryption. If you do, the encryption is done using an AWS KMS key. Encryption using AWS KMS is enabled at the log group level, by associating a KMS key with a log group, either when you create the log group or after it exists.

Recommendation: Checks that all log groups in Amazon CloudWatch Logs are encrypted with KMS
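
One possible remediation from the AWS CLI is to associate a KMS key with the log group. This sketch uses placeholder values, and the key policy must grant the CloudWatch Logs service permission to use the key.

# Sketch: associate a KMS key with an existing log group.
aws logs associate-kms-key \
    --log-group-name <log_group_name> \
    --kms-key-id <kms_key_arn>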

Learn about this finding type's supported assets and scan settings.

CloudTrail CloudWatch Logs Enabled

Category name in the API: CLOUD_TRAIL_CLOUD_WATCH_LOGS_ENABLED

This control checks whether CloudTrail trails are configured to send logs to CloudWatch Logs. The control fails if the CloudWatchLogsLogGroupArn property of the trail is empty.

CloudTrail records AWS API calls that are made in a given account. The recorded information includes the following:

  • The identity of the API caller
  • The time of the API call
  • The source IP address of the API caller
  • The request parameters
  • The response elements returned by the AWS service

CloudTrail uses Amazon S3 for log file storage and delivery. You can capture CloudTrail logs in a specified S3 bucket for long-term analysis. To perform real-time analysis, you can configure CloudTrail to send logs to CloudWatch Logs.

For a trail that is enabled in all Regions in an account, CloudTrail sends log files from all of those Regions to a CloudWatch Logs log group.

Security Hub recommends that you send CloudTrail logs to CloudWatch Logs. Note that this recommendation is intended to ensure that account activity is captured, monitored, and appropriately alarmed on. You can use CloudWatch Logs to set this up with your AWS services. This recommendation does not preclude the use of a different solution.

Sending CloudTrail logs to CloudWatch Logs facilitates real-time and historic activity logging based on user, API, resource, and IP address. You can use this approach to establish alarms and notifications for anomalous or sensitive account activity.

Recommendation: Checks that all CloudTrail trails are configured to send logs to AWS CloudWatch

Learn about this finding type's supported assets and scan settings.

No AWS Credentials in CodeBuild Project Environment Variables

Category name in the API: CODEBUILD_PROJECT_ENVVAR_AWSCRED_CHECK

This checks whether the project contains the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

Authentication credentials AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY should never be stored in clear text, as this could lead to unintended data exposure and unauthorized access.

Recommendation: Checks that all projects containing env variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are not in plaintext

Learn about this finding type's supported assets and scan settings.

Codebuild Project Source Repo Url Check

Category name in the API: CODEBUILD_PROJECT_SOURCE_REPO_URL_CHECK

This checks whether an AWS CodeBuild project Bitbucket source repository URL contains personal access tokens or a user name and password. The control fails if the Bitbucket source repository URL contains personal access tokens or a user name and password.

Sign-in credentials shouldn't be stored or transmitted in clear text or appear in the source repository URL. Instead of personal access tokens or sign-in credentials, you should access your source provider in CodeBuild, and change your source repository URL to contain only the path to the Bitbucket repository location. Using personal access tokens or sign-in credentials could result in unintended data exposure or unauthorized access.

Recommendation: Checks that all projects using GitHub or Bitbucket as the source use OAuth

Learn about this finding type's supported assets and scan settings.

Credentials Unused 45 Days Greater Disabled

Category name in the API: CREDENTIALS_UNUSED_45_DAYS_GREATER_DISABLED

AWS IAM users can access AWS resources using different types of credentials, such as passwords or access keys. It is recommended that all credentials that have been unused for 45 days or more be deactivated or removed.

Recommendation: Ensure credentials unused for 45 days or greater are disabled

To remediate this finding, complete the following steps:

AWS console

Perform the following to manage Unused Password (IAM user console access)

  1. Login to the AWS Management Console:
  2. Click Services
  3. Click IAM
  4. Click on Users
  5. Click on Security Credentials
  6. Select the user whose console last sign-in is more than 45 days ago
  7. Click Security credentials
  8. In the Sign-in credentials section, next to Console password, click Manage
  9. Under Console Access, select Disable
  10. Click Apply

Perform the following to deactivate Access Keys:

  1. Login to the AWS Management Console:
  2. Click Services
  3. Click IAM
  4. Click on Users
  5. Click on Security Credentials
  6. Select any access keys that are over 45 days old and that have been used and
    • Click on Make Inactive
  7. Select any access keys that are over 45 days old and that have not been used and
    • Click the X to Delete
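
AWS CLI

The following sketch shows the equivalent actions from the command line; user and key identifiers are placeholders, and deleting the login profile removes the user's console password.

# Sketch: deactivate an unused access key and remove console access.
aws iam update-access-key \
    --access-key-id <access_key_id> \
    --status Inactive \
    --user-name <user_name>

aws iam delete-login-profile --user-name <user_name>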

Learn about this finding type's supported assets and scan settings.

Default Security Group Vpc Restricts All Traffic

Category name in the API: DEFAULT_SECURITY_GROUP_VPC_RESTRICTS_ALL_TRAFFIC

A VPC comes with a default security group whose initial settings deny all inbound traffic, allow all outbound traffic, and allow all traffic between instances assigned to the security group. If you don't specify a security group when you launch an instance, the instance is automatically assigned to this default security group. Security groups provide stateful filtering of ingress/egress network traffic to AWS resources. It is recommended that the default security group restrict all traffic.

The default VPC in every region should have its default security group updated to comply. Any newly created VPCs will automatically contain a default security group that will need remediation to comply with this recommendation.

NOTE: When implementing this recommendation, VPC flow logging is invaluable in determining the least privilege port access required by systems to work properly because it can log all packet acceptances and rejections occurring under the current security groups. This dramatically reduces the primary barrier to least privilege engineering - discovering the minimum ports required by systems in the environment. Even if the VPC flow logging recommendation in this benchmark is not adopted as a permanent security measure, it should be used during any period of discovery and engineering for least privileged security groups.

Recommendation: Ensure the default security group of every VPC restricts all traffic
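
One way to remediate from the AWS CLI is to remove the default rules from each VPC's default security group. The following sketch assumes the default inbound rule (all traffic from the group itself) and outbound rule (all traffic to 0.0.0.0/0) are still in place, with <sg_id> as a placeholder; inspect the group first with describe-security-groups and adjust the revoked rules to match.

# Sketch: strip the default ingress and egress rules from a default security group.
aws ec2 revoke-security-group-ingress \
    --group-id <sg_id> \
    --ip-permissions '[{"IpProtocol":"-1","UserIdGroupPairs":[{"GroupId":"<sg_id>"}]}]'

aws ec2 revoke-security-group-egress \
    --group-id <sg_id> \
    --ip-permissions '[{"IpProtocol":"-1","IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'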

Learn about this finding type's supported assets and scan settings.

Dms Replication Not Public

Category name in the API: DMS_REPLICATION_NOT_PUBLIC

Checks whether AWS DMS replication instances are public. To do this, it examines the value of the PubliclyAccessible field.

A private replication instance has a private IP address that you cannot access outside of the replication network. A replication instance should have a private IP address when the source and target databases are in the same network. The network must also be connected to the replication instance's VPC using a VPN, AWS Direct Connect, or VPC peering. To learn more about public and private replication instances, see Public and private replication instances in the AWS Database Migration Service User Guide.

You should also ensure that access to your AWS DMS instance configuration is limited to only authorized users. To do this, restrict users' IAM permissions to modify AWS DMS settings and resources.

Recommendation: Checks whether AWS Database Migration Service replication instances are public

Learn about this finding type's supported assets and scan settings.

Do Setup Access Keys During Initial User Setup All Iam Users Console

Category name in the API: DO_SETUP_ACCESS_KEYS_DURING_INITIAL_USER_SETUP_ALL_IAM_USERS_CONSOLE

AWS console defaults to no check boxes selected when creating a new IAM user. When creating the IAM User credentials you have to determine what type of access they require.

Programmatic access: The IAM user might need to make API calls, use the AWS CLI, or use the Tools for Windows PowerShell. In that case, create an access key (access key ID and a secret access key) for that user.

AWS Management Console access: If the user needs to access the AWS Management Console, create a password for the user.

Recommendation: Do not setup access keys during initial user setup for all IAM users that have a console password

To remediate this finding, complete the following steps:

AWS console

  1. Login to the AWS Management Console:
  2. Click Services
  3. Click IAM
  4. Click on Users
  5. Click on Security Credentials
  6. As an Administrator
    • Click on the X (Delete) for keys that were created at the same time as the user profile but have not been used.
  7. As an IAM User
    • Click on the X (Delete) for keys that were created at the same time as the user profile but have not been used.

AWS CLI

aws iam delete-access-key --access-key-id <access-key-id-listed> --user-name <users-name>

Learn about this finding type's supported assets and scan settings.

Dynamodb Autoscaling Enabled

Category name in the API: DYNAMODB_AUTOSCALING_ENABLED

This checks whether an Amazon DynamoDB table can scale its read and write capacity as needed. This control passes if the table uses either on-demand capacity mode or provisioned mode with auto scaling configured. Scaling capacity with demand avoids throttling exceptions, which helps to maintain availability of your applications.

DynamoDB tables in on-demand capacity mode are only limited by the DynamoDB throughput default table quotas. To raise these quotas, you can file a support ticket through AWS Support.

DynamoDB tables in provisioned mode with auto scaling adjust the provisioned throughput capacity dynamically in response to traffic patterns. For additional information on DynamoDB request throttling, see Request throttling and burst capacity in the Amazon DynamoDB Developer Guide.

Recommendation: DynamoDB tables should automatically scale capacity with demand
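
One straightforward remediation is to switch the table to on-demand capacity mode, which satisfies this control. The following AWS CLI sketch uses a placeholder table name; alternatively, keep provisioned mode and configure auto scaling through Application Auto Scaling.

# Sketch: switch a table to on-demand (pay-per-request) capacity mode.
aws dynamodb update-table \
    --table-name <table_name> \
    --billing-mode PAY_PER_REQUEST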

Learn about this finding type's supported assets and scan settings.

Dynamodb In Backup Plan

Category name in the API: DYNAMODB_IN_BACKUP_PLAN

This control evaluates whether a DynamoDB table is covered by a backup plan. The control fails if a DynamoDB table isn't covered by a backup plan. This control only evaluates DynamoDB tables that are in the ACTIVE state.

Backups help you recover more quickly from a security incident. They also strengthen the resilience of your systems. Including DynamoDB tables in a backup plan helps you protect your data from unintended loss or deletion.

Recommendation: DynamoDB tables should be covered by a backup plan

Learn about this finding type's supported assets and scan settings.

Dynamodb Pitr Enabled

Category name in the API: DYNAMODB_PITR_ENABLED

Point In Time Recovery (PITR) is one of the mechanisms available to back up DynamoDB tables.

A point in time backup is kept for 35 days. If you require longer retention, see Set up scheduled backups for Amazon DynamoDB using AWS Backup in the AWS Documentation.

Recommendation: Checks that point in time recovery (PITR) is enabled for all AWS DynamoDB tables

To remediate this finding, complete the following steps:

Terraform

In order to set PITR for DynamoDB tables, set the point_in_time_recovery block:

resource "aws_dynamodb_table" "example" {
  # ... other configuration ...
  point_in_time_recovery {
    enabled = true
  }
}

AWS Console

To enable DynamoDB point-in-time recovery for an existing table

  1. Open the DynamoDB console at https://console.aws.amazon.com/dynamodb/.
  2. Choose the table that you want to work with, and then choose Backups.
  3. In the Point-in-time Recovery section, under Status, choose Enable.
  4. Choose Enable again to confirm the change.

AWS CLI

aws dynamodb update-continuous-backups \
  --table-name "GameScoresOnDemand" \
  --point-in-time-recovery-specification "PointInTimeRecoveryEnabled=true"

Learn about this finding type's supported assets and scan settings.

Dynamodb Table Encrypted Kms

Category name in the API: DYNAMODB_TABLE_ENCRYPTED_KMS

Checks whether all DynamoDB tables are encrypted with a customer managed KMS key (non-default).

Recommendation: Checks that all DynamoDB tables are encrypted with AWS Key Management Service (KMS)

To remediate this finding, complete the following steps:

Terraform

To remediate this control, create an AWS KMS Key and use it to encrypt the violating DynamoDB resource.

resource "aws_kms_key" "dynamodb_encryption" {
  description         = "Used for DynamoDB encryption configuration"
  enable_key_rotation = true
}

resource "aws_dynamodb_table" "example" {
  # ... other configuration ...
  server_side_encryption {
    enabled     = true
    kms_key_arn = aws_kms_key.dynamodb_encryption.arn
  }
}

AWS Console

Assuming there is an existing AWS KMS key available to encrypt DynamoDB.

To change a DynamoDB table encryption to a customer managed and owned KMS key.

  1. Open the DynamoDB console at https://console.aws.amazon.com/dynamodb/.
  2. Choose the table that you want to work with, and then choose Additional settings.
  3. Under Encryption, choose Manage encryption.
  4. For Encryption at rest, choose Stored in your account, and owned and managed by you.
  5. Select the AWS Key to use. Save changes.

AWS CLI

aws dynamodb update-table \
  --table-name <value> \
  --sse-specification "Enabled=true,SSEType=KMS,KMSMasterKeyId=<kms_key_arn>"

Learn about this finding type's supported assets and scan settings.

Ebs Optimized Instance

Category name in the API: EBS_OPTIMIZED_INSTANCE

Checks whether EBS optimization is enabled for your EC2 instances that can be EBS-optimized

Recommendation: Checks that EBS optimization is enabled for all instances that support EBS optimization
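
For instance types where EBS optimization is optional, it can be turned on from the AWS CLI. This sketch uses a placeholder instance ID, and the instance generally must be stopped before the attribute can be modified.

# Sketch: enable EBS optimization on a stopped instance.
aws ec2 modify-instance-attribute \
    --instance-id <instance_id> \
    --ebs-optimized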

Learn about this finding type's supported assets and scan settings.

Ebs Snapshot Public Restorable Check

Category name in the API: EBS_SNAPSHOT_PUBLIC_RESTORABLE_CHECK

Checks whether Amazon Elastic Block Store snapshots are not public. The control fails if Amazon EBS snapshots are restorable by anyone.

EBS snapshots are used to back up the data on your EBS volumes to Amazon S3 at a specific point in time. You can use the snapshots to restore previous states of EBS volumes. It is rarely acceptable to share a snapshot with the public. Typically the decision to share a snapshot publicly was made in error or without a complete understanding of the implications. This check helps ensure that all such sharing was fully planned and intentional.

Recommendation: Amazon EBS snapshots should not be publicly restorable

To remediate this finding, complete the following steps:

AWS Console

To remediate this issue, update your EBS snapshot to make it private instead of public.

To make a public EBS snapshot private:

  1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
  2. In the navigation pane, under Elastic Block Store, choose Snapshots menu and then choose your public snapshot.
  3. From Actions, choose Modify permissions.
  4. Choose Private.
  5. (Optional) Add the AWS account numbers of the authorized accounts to share your snapshot with and choose Add Permission.
  6. Choose Save.
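
AWS CLI

The same change can be made from the command line by removing the "all" group from the snapshot's createVolumePermission attribute; the snapshot ID below is a placeholder.

# Sketch: make a public snapshot private again.
aws ec2 modify-snapshot-attribute \
    --snapshot-id <snapshot_id> \
    --attribute createVolumePermission \
    --operation-type remove \
    --group-names all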

Learn about this finding type's supported assets and scan settings.

Ebs Volume Encryption Enabled All Regions

Category name in the API: EBS_VOLUME_ENCRYPTION_ENABLED_ALL_REGIONS

Elastic Compute Cloud (EC2) supports encryption at rest when using the Elastic Block Store (EBS) service. While disabled by default, forcing encryption at EBS volume creation is supported.

Recommendation: Ensure EBS Volume Encryption is Enabled in all Regions

To remediate this finding, complete the following steps:

AWS console

  1. Login to AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/
  2. Under Account attributes, click EBS encryption.
  3. Click Manage.
  4. Click the Enable checkbox.
  5. Click Update EBS encryption
  6. Repeat for every region requiring the change.

Note: EBS volume encryption is configured per region.

AWS CLI

  1. Run aws --region <region> ec2 enable-ebs-encryption-by-default
  2. Verify that "EbsEncryptionByDefault": true is displayed.
  3. Repeat for every region requiring the change.

Note: EBS volume encryption is configured per region.

Learn about this finding type's supported assets and scan settings.

Ec2 Instances In Vpc

Category name in the API: EC2_INSTANCES_IN_VPC

Amazon VPC provides more security functionality than EC2 Classic. It is recommended that all nodes belong to an Amazon VPC.

Recommendation: Ensures that all instances belong to a VPC

To remediate this finding, complete the following steps:

Terraform

If you have EC2 Classic Instances defined in Terraform, you may modify your resources to be part of a VPC. This migration will depend on an architecture that best suits your needs. The following is a simple Terraform example that illustrates a publicly exposed EC2 in a VPC.

  resource "aws_vpc" "example_vpc" {
    cidr_block = "10.0.0.0/16"
  }

  resource "aws_subnet" "example_public_subnet" {
    vpc_id            = aws_vpc.example_vpc.id
    cidr_block        = "10.0.1.0/24"
    availability_zone = "<availability_zone>"
  }

  resource "aws_internet_gateway" "example_igw" {
    vpc_id = aws_vpc.example_vpc.id
  }

  resource "aws_key_pair" "example_key" {
    key_name   = "web-instance-key"
    public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3F6tyPEFEzV0LX3X8BsXdMsQz1x2cEikKDEY0aIj41qgxMCP/iteneqXSIFZBp5vizPvaoIR3Um9xK7PGoW8giupGn+EPuxIA4cDM4vzOqOkiMPhz5XK0whEjkVzTo4+S0puvDZuwIsdiW9mxhJc7tgBNL0cYlWSYVkz4G/fslNfRPW5mYAM49f4fhtxPb5ok4Q2Lg9dPKVHO/Bgeu5woMc7RY0p1ej6D4CKFE6lymSDJpW0YHX/wqE9+cfEauh7xZcG0q9t2ta6F6fmX0agvpFyZo8aFbXeUBr7osSCJNgvavWbM/06niWrOvYX2xwWdhXmXSrbX8ZbabVohBK41 email@example.com"
  }

  resource "aws_security_group" "web_sg" {
    name   = "http and ssh"
    vpc_id = aws_vpc.example_vpc.id

    ingress {
      from_port   = 80
      to_port     = 80
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }

    ingress {
      from_port   = 22
      to_port     = 22
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }

    egress {
      from_port   = 0
      to_port     = 0
      protocol    = -1
      cidr_blocks = ["0.0.0.0/0"]
    }
  }

  resource "aws_instance" "web" {
    ami                    = <ami_id>
    instance_type          = <instance_flavor>
    key_name               = aws_key_pair.example_key.name
    monitoring             = true
    subnet_id              = aws_subnet.example_public_subnet.id
    vpc_security_group_ids = [aws_security_group.web_sg.id]
    metadata_options {
      http_tokens = "required"
    }
  }

AWS Console

In order to migrate your EC2 Classic to VPC, see Migrate from EC2-Classic to a VPC

AWS CLI

This AWS CLI example illustrates the same infrastructure defined with Terraform. It's a simple example of a publicly exposed EC2 instance in a VPC

Create VPC

  aws ec2 create-vpc \
  --cidr-block 10.0.0.0/16

Create Public Subnet

  aws ec2 create-subnet \
  --availability-zone <availability_zone> \
  --cidr-block 10.0.1.0/24 \
  --vpc-id <id_from_create-vpc_command>

Create Internet Gateway

  aws ec2 create-internet-gateway

Attach Internet Gateway to VPC

  aws ec2 attach-internet-gateway \
  --internet-gateway-id <id_from_create-internet-gateway_command> \
  --vpc-id <id_from_create-vpc_command>

Create Key Pair - This will save your private key in ~/.ssh/web-instance-key.pem

  aws ec2 create-key-pair \
  --key-name web-instance-key \
  --query "KeyMaterial" \
  --output text > ~/.ssh/web-instance-key.pem && \
  chmod 400 ~/.ssh/web-instance-key.pem

Create Security Group

  aws ec2 create-security-group \
  --group-name "http and ssh" \
  --vpc-id <id_from_create-vpc_command>

Create Security Group Rules - For more restricted access, define a more restricted CIDR for SSH on port 22

  aws ec2 authorize-security-group-ingress \
  --group-id <id_from_create-security-group_command> \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0

  aws ec2 authorize-security-group-ingress \
  --group-id <id_from_create-security-group_command> \
  --protocol tcp \
  --port 22 \
  --cidr 0.0.0.0/0

  aws ec2 authorize-security-group-egress \
  --group-id <id_from_create-security-group_command> \
  --protocol -1 \
  --port 0 \
  --cidr 0.0.0.0/0

Create EC2 Instance

  aws ec2 run-instances \
  --image-id <ami_id> \
  --instance-type <instance_flavor> \
  --metadata-options "HttpEndpoint=enabled,HttpTokens=required" \
  --monitoring true \
  --key-name web-instance-key \
  --subnet-id <id_from_create-subnet_command> \
  --security-group-ids <id_from_create-security-group_command>

Learn about this finding type's supported assets and scan settings.

Ec2 Instance No Public Ip

Category name in the API: EC2_INSTANCE_NO_PUBLIC_IP

EC2 instances that have a public IP address are at an increased risk of compromise. It is recommended that EC2 instances not be configured with a public IP address.

Recommendation: Ensures no instances have a public IP

To remediate this finding, complete the following steps:

Terraform

Use the associate_public_ip_address = false argument with the aws_instance resource to ensure EC2 instances are provisioned without a public IP address


resource "aws_instance" "no_public_ip" {
  ...
  associate_public_ip_address = false
}

AWS Console

By default, nondefault subnets have the IPv4 public addressing attribute set to false, and default subnets have this attribute set to true. An exception is a nondefault subnet created by the Amazon EC2 launch instance wizard — the wizard sets the attribute to true. You can modify this attribute using the Amazon VPC console.

To modify your subnet's public IPv4 addressing behavior

  1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.
  2. In the navigation pane, choose Subnets.
  3. Select your subnet and choose Actions, Edit subnet settings.
  4. The Enable auto-assign public IPv4 address check box, if selected, requests a public IPv4 address for all instances launched into the selected subnet. Select or clear the check box as required, and then choose Save.

AWS CLI

The following command runs an EC2 Instance in a default subnet without associating a public IP address to it.

aws ec2 run-instances \
--image-id <ami_id> \
--instance-type <instance_flavor> \
--no-associate-public-ip-address \
--key-name MyKeyPair

Learn about this finding type's supported assets and scan settings.

Ec2 Managedinstance Association Compliance Status Check

Category name in the API: EC2_MANAGEDINSTANCE_ASSOCIATION_COMPLIANCE_STATUS_CHECK

A State Manager association is a configuration that is assigned to your managed instances. The configuration defines the state that you want to maintain on your instances. For example, an association can specify that antivirus software must be installed and running on your instances, or that certain ports must be closed. EC2 instances that have an association with AWS Systems Manager are under the management of Systems Manager, which makes it easier to apply patches, fix misconfigurations, and respond to security events.

Recommendation: Checks the compliance status of the AWS Systems Manager association

To remediate this finding, complete the following steps:

Terraform

The following example demonstrates how to create a simple EC2 instance, an AWS Systems Manager (SSM) Document and an association between SSM and the EC2 instance. Documents supported are of type Command and Policy.

resource "aws_instance" "web" {
  ami           = "<ami_id>"
  instance_type = "<instance_flavor>"
}

resource "aws_ssm_document" "check_ip" {
  name          = "check-ip-config"
  document_type = "Command"

  content = <<DOC
  {
    "schemaVersion": "1.2",
    "description": "Check ip configuration of a Linux instance.",
    "parameters": {

    },
    "runtimeConfig": {
      "aws:runShellScript": {
        "properties": [
          {
            "id": "0.aws:runShellScript",
            "runCommand": ["ifconfig"]
          }
        ]
      }
    }
  }
DOC
}

resource "aws_ssm_association" "check_ip_association" {
  name = aws_ssm_document.check_ip.name

  targets {
    key    = "InstanceIds"
    values = [aws_instance.web.id]
  }
}

AWS Console

For information on configuring associations with AWS Systems Manager using the console, see Creating Associations in the AWS Systems Manager documentation.

AWS CLI

Create an SSM Document

aws ssm create-document \
--name <document_name> \
--content  file://path/to-file/document.json \
--document-type "Command"

Create an SSM Association

aws ssm create-association \
--name <document_name> \
--targets "Key=InstanceIds,Values=<instance-id-1>,<instance-id-2>"

Learn about this finding type's supported assets and scan settings.

Ec2 Managedinstance Patch Compliance Status Check

Category name in the API: EC2_MANAGEDINSTANCE_PATCH_COMPLIANCE_STATUS_CHECK

This control checks whether the status of the AWS Systems Manager association compliance is COMPLIANT or NON_COMPLIANT after the association is run on an instance. The control fails if the association compliance status is NON_COMPLIANT.

A State Manager association is a configuration that is assigned to your managed instances. The configuration defines the state that you want to maintain on your instances. For example, an association can specify that antivirus software must be installed and running on your instances or that certain ports must be closed.

After you create one or more State Manager associations, compliance status information is immediately available to you. You can view the compliance status in the console or in response to AWS CLI commands or corresponding Systems Manager API actions. For associations, Configuration Compliance shows the compliance status (Compliant or Non-compliant). It also shows the severity level assigned to the association, such as Critical or Medium.

To learn more about State Manager association compliance, see About State Manager association compliance in the AWS Systems Manager User Guide.

Recommendation: Checks the status of AWS Systems Manager patch compliance
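
You can review association compliance from the AWS CLI as well as the console. As a sketch only (the instance ID is a placeholder), the following command lists compliance items, including association compliance, for a managed instance:

aws ssm list-compliance-items \
  --resource-ids <instance_id> \
  --resource-types ManagedInstance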

Learn about this finding type's supported assets and scan settings.

Ec2 Metadata Service Allows Imdsv2

Category name in the API: EC2_METADATA_SERVICE_ALLOWS_IMDSV2

When enabling the Metadata Service on AWS EC2 instances, users have the option of using either Instance Metadata Service Version 1 (IMDSv1; a request/response method) or Instance Metadata Service Version 2 (IMDSv2; a session-oriented method).

Recommendation: Ensure that EC2 Metadata Service only allows IMDSv2
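
One way to enforce IMDSv2 on an existing instance is with the AWS CLI; the following is a sketch only, with the instance ID as a placeholder:

aws ec2 modify-instance-metadata-options \
  --instance-id <instance_id> \
  --http-tokens required \
  --http-endpoint enabled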

Learn about this finding type's supported assets and scan settings.

Ec2 Volume Inuse Check

Category name in the API: EC2_VOLUME_INUSE_CHECK

Identifying and removing unattached (unused) Elastic Block Store (EBS) volumes in your AWS account lowers the cost of your monthly AWS bill. Deleting unused EBS volumes also reduces the risk of confidential or sensitive data leaving your premises. Additionally, this control checks whether EC2 instances are configured to delete volumes on termination.

By default, EC2 instances are configured to delete the data in any EBS volumes associated with the instance, and to delete the root EBS volume of the instance. However, any non-root EBS volumes attached to the instance, at launch or during execution, are persisted after termination by default.

Recommendation: Checks whether EBS volumes are attached to EC2 instances and configured for deletion on instance termination

To remediate this finding, complete the following steps:

Terraform

To prevent this scenario using Terraform, create EC2 instances with embedded EBS blocks. This ensures that any EBS volumes associated with the instance (not only the root volume) are deleted on instance termination, because the ebs_block_device.delete_on_termination attribute defaults to true.

resource "aws_instance" "web" {
    ami                    = <ami_id>
    instance_type          = <instance_flavor>
    ebs_block_device {
      delete_on_termination = true # Default
      device_name           = "/dev/sdh"
    }
}

AWS Console

To delete an EBS volume using the console

  1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
  2. In the navigation pane, choose Volumes.
  3. Select the volume to delete and choose Actions, Delete volume.
  4. Note: If Delete volume is greyed out, the volume is attached to an instance. You must detach the volume from the instance before it can be deleted.
  5. In the confirmation dialog box, choose Delete.

AWS CLI

This example command deletes an available volume with the volume ID of vol-049df61146c4d7901. If the command succeeds, no output is returned.

aws ec2 delete-volume --volume-id vol-049df61146c4d7901

Learn about this finding type's supported assets and scan settings.

Efs Encrypted Check

Category name in the API: EFS_ENCRYPTED_CHECK

Amazon EFS supports two forms of encryption for file systems, encryption of data in transit and encryption at rest. This checks that all EFS file systems are configured with encryption-at-rest across all enabled regions in the account.

Recommendation: Checks whether EFS is configured to encrypt file data using KMS

To remediate this finding, complete the following steps:

Terraform

The following code snippet can be used to create a KMS-encrypted EFS file system (note: the kms_key_id attribute is optional; if no KMS key ID is passed, the default AWS managed key for Amazon EFS is used)

resource "aws_efs_file_system" "encrypted-efs" {
  creation_token = "my-kms-encrypted-efs"
  encrypted      = true
  kms_key_id     = "arn:aws:kms:us-west-2:12344375555:key/16393ebd-3348-483f-b162-99b6648azz23"

  tags = {
    Name = "MyProduct"
  }
}

AWS Console

To configure EFS with encryption using the AWS console, see Encrypting a file system at rest using the console.

AWS CLI

Note that while creating an EFS file system from the console enables encryption at rest by default, this is not true for file systems created using the CLI, API, or SDK. The following example allows you to create an encrypted file system in your infrastructure.

aws efs create-file-system \
--backup \
--encrypted \
--region us-east-1

Learn about this finding type's supported assets and scan settings.

Efs In Backup Plan

Category name in the API: EFS_IN_BACKUP_PLAN

Amazon best practices recommend configuring backups for your Elastic File Systems (EFS). This checks all EFS across every enabled region in your AWS account for enabled backups.

Recommendation: Checks whether EFS filesystems are included in AWS Backup plans

To remediate this finding, complete the following steps:

Terraform

Use the aws_efs_backup_policy resource to configure a backup policy for EFS file systems.

resource "aws_efs_file_system" "encrypted-efs" {
  creation_token = "my-encrypted-efs"
  encrypted      = true

  tags = merge({
    Name = "${local.resource_prefix.value}-efs"
    }, {
    git_file             = "terraform/aws/efs.tf"
    git_org              = "your_git_org"
    git_repo             = "your_git_repo"
  })
}

resource "aws_efs_backup_policy" "policy" {
  file_system_id = aws_efs_file_system.encrypted-efs.id

  backup_policy {
    status = "ENABLED"
  }
}

AWS Console

There are two options for backing up EFS: the AWS Backup service and the EFS-to-EFS backup solution. To remediate EFS file systems that are not backed up using the console, see:

  1. Using AWS Backup to back up and restore Amazon EFS file systems
  2. EFS-to-EFS Backup

AWS CLI

There are a few options to create compliant EFS file systems using the CLI:

  1. Create an EFS with automatic backup enabled (default for One Zone storage and conditional to backup availability in the AWS Region)
  2. Create an EFS and put a backup policy

However, assuming the remediation needs to happen in existing EFS file systems, the best option is to create a backup policy and associate it with your non-compliant EFS file systems. You will need one command for every EFS file system in your infrastructure.

arr=( $(aws efs describe-file-systems | jq -r '.FileSystems[].FileSystemId') )
for efs in "${arr[@]}"
do
  aws efs put-backup-policy \
  --file-system-id "${efs}" \
  --backup-policy "Status=ENABLED"
done

Learn about this finding type's supported assets and scan settings.

Elb Acm Certificate Required

Category name in the API: ELB_ACM_CERTIFICATE_REQUIRED

Checks whether the Classic Load Balancer uses HTTPS/SSL certificates provided by AWS Certificate Manager (ACM). The control fails if the Classic Load Balancer configured with HTTPS/SSL listener does not use a certificate provided by ACM.

To create a certificate, you can use either ACM or a tool that supports the SSL and TLS protocols, such as OpenSSL. Security Hub recommends that you use ACM to create or import certificates for your load balancer.

ACM integrates with Classic Load Balancers so that you can deploy the certificate on your load balancer. You should also automatically renew these certificates.

Recommendation: Checks that all Classic Load Balancers use SSL certificates provided by AWS Certificate Manager
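
One way to attach an ACM certificate to an existing HTTPS/SSL listener is with the AWS CLI; the following is a sketch only, with the load balancer name and certificate ARN as placeholders:

aws elb set-load-balancer-listener-ssl-certificate \
  --load-balancer-name <load_balancer_name> \
  --load-balancer-port 443 \
  --ssl-certificate-id <acm_certificate_arn>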

Learn about this finding type's supported assets and scan settings.

Elb Deletion Protection Enabled

Category name in the API: ELB_DELETION_PROTECTION_ENABLED

Checks whether an Application Load Balancer has deletion protection enabled. The control fails if deletion protection is not configured.

Enable deletion protection to protect your Application Load Balancer from deletion.

Recommendation: Application Load Balancer deletion protection should be enabled

To remediate this finding, complete the following steps:

AWS Console

To prevent your load balancer from being deleted accidentally, you can enable deletion protection. By default, deletion protection is disabled for your load balancer.

If you enable deletion protection for your load balancer, you must disable delete protection before you can delete the load balancer.

To enable deletion protection from the console.

  1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
  2. On the navigation pane, under LOAD BALANCING, choose Load Balancers.
  3. Choose the load balancer.
  4. On the Description tab, choose Edit attributes.
  5. On the Edit load balancer attributes page, select Enable for Delete Protection, and then choose Save.
  6. Choose Save.
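
AWS CLI

Deletion protection can also be enabled with the AWS CLI; the following is a sketch only, with the load balancer ARN as a placeholder:

aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn <load_balancer_arn> \
  --attributes Key=deletion_protection.enabled,Value=true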

Learn about this finding type's supported assets and scan settings.

Elb Logging Enabled

Category name in the API: ELB_LOGGING_ENABLED

This checks whether the Application Load Balancer and the Classic Load Balancer have logging enabled. The control fails if access_logs.s3.enabled is false.

Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot issues.

To learn more, see Access logs for your Classic Load Balancer in User Guide for Classic Load Balancers.

Recommendation: Checks whether classic and application load balancers have logging enabled

To remediate this finding, complete the following steps:

AWS Console

To remediate this issue, update your load balancers to enable logging.

To enable access logs

  1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
  2. In the navigation pane, choose Load balancers.
  3. Choose an Application Load Balancer or Classic Load Balancer.
  4. From Actions, choose Edit attributes.
  5. Under Access logs, choose Enable.
  6. Enter your S3 location. This location can exist or it can be created for you. If you do not specify a prefix, the access logs are stored in the root of the S3 bucket.
  7. Choose Save.
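
AWS CLI

Access logging for an Application Load Balancer can also be enabled with the AWS CLI; the following is a sketch only, with the load balancer ARN, bucket name, and prefix as placeholders. Classic Load Balancers use aws elb modify-load-balancer-attributes instead.

aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn <load_balancer_arn> \
  --attributes Key=access_logs.s3.enabled,Value=true Key=access_logs.s3.bucket,Value=<bucket_name> Key=access_logs.s3.prefix,Value=<prefix>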

Learn about this finding type's supported assets and scan settings.

Elb Tls Https Listeners Only

Category name in the API: ELB_TLS_HTTPS_LISTENERS_ONLY

This check ensures all Classic Load Balancers are configured to use secure communication.

A listener is a process that checks for connection requests. It is configured with a protocol and a port for front-end (client to load balancer) connections and a protocol and a port for back-end (load balancer to instance) connections. For information about the ports, protocols, and listener configurations supported by Elastic Load Balancing, see Listeners for your Classic Load Balancer.

Recommendation: Checks that all Classic Load Balancers are configured with SSL or HTTPS listeners
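
One way to replace an insecure listener is with the AWS CLI; the following is a sketch only, with the load balancer name and ACM certificate ARN as placeholders. It adds an HTTPS listener and then removes the plain HTTP listener on port 80:

# Add an HTTPS listener that uses the ACM certificate (placeholder ARN)
aws elb create-load-balancer-listeners \
  --load-balancer-name <load_balancer_name> \
  --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=<acm_certificate_arn>"

# Remove the insecure HTTP listener on port 80
aws elb delete-load-balancer-listeners \
  --load-balancer-name <load_balancer_name> \
  --load-balancer-ports 80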

Learn about this finding type's supported assets and scan settings.

Encrypted Volumes

Category name in the API: ENCRYPTED_VOLUMES

Checks whether the EBS volumes that are in an attached state are encrypted. To pass this check, EBS volumes must be in use and encrypted. If the EBS volume is not attached, then it is not subject to this check.

For an added layer of security of your sensitive data in EBS volumes, you should enable EBS encryption at rest. Amazon EBS encryption offers a straightforward encryption solution for your EBS resources that doesn't require you to build, maintain, and secure your own key management infrastructure. It uses KMS keys when creating encrypted volumes and snapshots.

To learn more about Amazon EBS encryption, see Amazon EBS encryption in the Amazon EC2 User Guide for Linux Instances.

Recommendation: Attached Amazon EBS volumes should be encrypted at-rest

To remediate this finding, complete the following steps:

AWS Console

There is no direct way to encrypt an existing unencrypted volume or snapshot. You can only encrypt a new volume or snapshot when you create it.

If you enabled encryption by default, Amazon EBS encrypts the resulting new volume or snapshot using your default key for Amazon EBS encryption. Even if you have not enabled encryption by default, you can enable encryption when you create an individual volume or snapshot. In both cases, you can override the default key for Amazon EBS encryption and choose a symmetric customer managed key.

For more information, see Creating an Amazon EBS volume and Copying an Amazon EBS snapshot in the Amazon EC2 User Guide for Linux Instances.
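
AWS CLI

From the AWS CLI, one approach is to enable EBS encryption by default for the Region and, for existing data, create an encrypted copy of a snapshot that you can restore to a new volume. The following is a sketch only, with the Region, snapshot ID, and KMS key ID as placeholders:

# Enable EBS encryption by default for new volumes in the Region
aws ec2 enable-ebs-encryption-by-default --region <region>

# Create an encrypted copy of an existing snapshot
aws ec2 copy-snapshot \
  --source-region <region> \
  --source-snapshot-id <snapshot_id> \
  --encrypted \
  --kms-key-id <kms_key_id>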

Learn about this finding type's supported assets and scan settings.

Encryption At Rest Enabled Rds Instances

Category name in the API: ENCRYPTION_AT_REST_ENABLED_RDS_INSTANCES

Amazon RDS encrypted DB instances use the industry standard AES-256 encryption algorithm to encrypt your data on the server that hosts your Amazon RDS DB instances. After your data is encrypted, Amazon RDS handles authentication of access and decryption of your data transparently with a minimal impact on performance.

Recommendation: Ensure that encryption-at-rest is enabled for RDS Instances

To remediate this finding, complete the following steps:

AWS console

  1. Login to the AWS Management Console and open the RDS dashboard at https://console.aws.amazon.com/rds/.
  2. In the left navigation panel, click on Databases.
  3. Select the database instance that needs to be encrypted.
  4. Click on the Actions button placed at the top right and select Take Snapshot.
  5. On the Take Snapshot page, enter the name of the database of which you want to take a snapshot in the Snapshot Name field and click on Take Snapshot.
  6. Select the newly created snapshot, click on the Actions button placed at the top right, and select Copy snapshot from the Actions menu.
  7. On the Make Copy of DB Snapshot page, perform the following:
  • In the New DB Snapshot Identifier field, enter a name for the new snapshot.
  • Check Copy Tags. The new snapshot must have the same tags as the source snapshot.
  • Select Yes from the Enable Encryption dropdown list to enable encryption. You can choose to use the AWS default encryption key or a custom key from the Master Key dropdown list.
  8. Click Copy Snapshot to create an encrypted copy of the selected instance snapshot.
  9. Select the new encrypted snapshot copy, click on the Actions button placed at the top right, and select Restore Snapshot from the Actions menu. This restores the encrypted snapshot to a new database instance.
  10. On the Restore DB Instance page, enter a unique name for the new database instance in the DB Instance Identifier field.
  11. Review the instance configuration details and click Restore DB Instance.
  12. After the new instance provisioning process is completed, update the application configuration to refer to the endpoint of the new encrypted database instance. Once the database endpoint is changed at the application level, you can remove the unencrypted instance.

AWS CLI

  1. Run the describe-db-instances command to list all RDS database names available in the selected AWS region. The command output should return the database instance identifier. aws rds describe-db-instances --region <region-name> --query 'DBInstances[*].DBInstanceIdentifier'
  2. Run the create-db-snapshot command to create a snapshot for the selected database instance. The command output will return the new snapshot with name DB Snapshot Name. aws rds create-db-snapshot --region <region-name> --db-snapshot-identifier <DB-Snapshot-Name> --db-instance-identifier <DB-Name>
  3. Now run the list-aliases command to list the KMS key aliases available in a specified region. The command output should return each key alias currently available. For our RDS encryption activation process, locate the ID of the AWS default KMS key. aws kms list-aliases --region <region-name>
  4. Run the copy-db-snapshot command using the default KMS key ID for RDS instances returned earlier to create an encrypted copy of the database instance snapshot. The command output will return the encrypted instance snapshot configuration. aws rds copy-db-snapshot --region <region-name> --source-db-snapshot-identifier <DB-Snapshot-Name> --target-db-snapshot-identifier <DB-Snapshot-Name-Encrypted> --copy-tags --kms-key-id <KMS-ID-For-RDS>
  5. Run the restore-db-instance-from-db-snapshot command to restore the encrypted snapshot created at the previous step to a new database instance. If successful, the command output should return the new encrypted database instance configuration. aws rds restore-db-instance-from-db-snapshot --region <region-name> --db-instance-identifier <DB-Name-Encrypted> --db-snapshot-identifier <DB-Snapshot-Name-Encrypted>
  6. Run the describe-db-instances command to list all RDS database names available in the selected AWS region. The output will return the database instance identifier names. Select the encrypted database name that was just created, DB-Name-Encrypted. aws rds describe-db-instances --region <region-name> --query 'DBInstances[*].DBInstanceIdentifier'
  7. Run the describe-db-instances command again using the RDS instance identifier returned earlier, to determine if the selected database instance is encrypted. The command output should return the encryption status True. aws rds describe-db-instances --region <region-name> --db-instance-identifier <DB-Name-Encrypted> --query 'DBInstances[*].StorageEncrypted'

Learn about this finding type's supported assets and scan settings.

Encryption Enabled Efs File Systems

Category name in the API: ENCRYPTION_ENABLED_EFS_FILE_SYSTEMS

EFS data should be encrypted at rest using AWS KMS (Key Management Service).

Recommendation: Ensure that encryption is enabled for EFS file systems

To remediate this finding, complete the following steps:

AWS console

  1. Login to the AWS Management Console and Navigate to Elastic File System (EFS) dashboard.
  2. Select File Systems from the left navigation panel.
  3. Click Create File System button from the dashboard top menu to start the file system setup process.
  4. On the Configure file system access configuration page, perform the following actions.
  5. Choose the right VPC from the VPC dropdown list.
  6. Within Create mount targets section, select the checkboxes for all of the Availability Zones (AZs) within the selected VPC. These will be your mount targets.
  7. Click Next step to continue.

  8. Perform the following on the Configure optional settings page.

  9. Create tags to describe your new file system.

  10. Choose performance mode based on your requirements.

  11. Check Enable encryption checkbox and choose aws/elasticfilesystem from Select KMS master key dropdown list to enable encryption for the new file system using the default master key provided and managed by AWS KMS.

  12. Click Next step to continue.

  13. Review the file system configuration details on the review and create page and then click Create File System to create your new AWS EFS file system.

  14. Copy the data from the old unencrypted EFS file system onto the newly created encrypted file system.

  15. Remove the unencrypted file system as soon as your data migration to the newly created encrypted file system is completed.

  16. Change the AWS Region from the navigation bar and repeat the entire process for the other AWS Regions.

AWS CLI

  1. Run the describe-file-systems command to describe the configuration information available for the selected (unencrypted) file system (see the Audit section to identify the right resource):

aws efs describe-file-systems --region <region> --file-system-id <file-system-id from audit section step 2 output>
  2. The command output should return the requested configuration information.
  3. To provision a new AWS EFS file system, you need to generate a universally unique identifier (UUID) in order to create the token required by the create-file-system command. To create the required token, you can use a randomly generated UUID from "https://www.uuidgenerator.net".
  4. Run the create-file-system command using the unique token created at the previous step. aws efs create-file-system --region <region> --creation-token <Token (randomly generated UUID from step 3)> --performance-mode generalPurpose --encrypted
  5. The command output should return the new file system configuration metadata.
  6. Run the create-mount-target command using the newly created EFS file system ID returned at the previous step as the identifier and the ID of the Availability Zone (AZ) that will represent the mount target:

aws efs create-mount-target --region <region> --file-system-id <file-system-id> --subnet-id <subnet-id>
  7. The command output should return the new mount target metadata.
  8. Now you can mount your file system from an EC2 instance.
  9. Copy the data from the old unencrypted EFS file system onto the newly created encrypted file system.
  10. Remove the unencrypted file system as soon as your data migration to the newly created encrypted file system is completed. aws efs delete-file-system --region <region> --file-system-id <unencrypted-file-system-id>
  11. Change the AWS Region by updating the --region parameter and repeat the entire process for the other AWS Regions.

Learn about this finding type's supported assets and scan settings.

Iam Password Policy

Category name in the API: IAM_PASSWORD_POLICY

AWS allows for custom password policies on your AWS account to specify complexity requirements and mandatory rotation periods for your IAM users' passwords. If you don't set a custom password policy, IAM user passwords must meet the default AWS password policy. AWS security best practices recommend the following password complexity requirements:

  • Require at least one uppercase character in passwords.
  • Require at least one lowercase character in passwords.
  • Require at least one symbol in passwords.
  • Require at least one number in passwords.
  • Require a minimum password length of at least 14 characters.
  • Require at least 24 passwords before allowing reuse.
  • Require passwords to expire after at most 90 days.

This control checks all of the specified password policy requirements.

Recommendation: Checks whether the account password policy for IAM users meets the specified requirements

To remediate this finding, complete the following steps:

Terraform

resource "aws_iam_account_password_policy" "strict" {
  allow_users_to_change_password = true
  require_uppercase_characters   = true
  require_lowercase_characters   = true
  require_symbols                = true
  require_numbers                = true
  minimum_password_length        = 14
  password_reuse_prevention      = 24
  max_password_age               = 90
}

AWS Console

To create a custom password policy

  1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
  2. In the navigation pane, choose Account settings.
  3. In the Password policy section, choose Change password policy.
  4. Select the options that you want to apply to your password policy and choose Save changes.

To change a custom password policy

  1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
  2. In the navigation pane, choose Account settings.
  3. In the Password policy section, choose Change.
  4. Select the options that you want to apply to your password policy and choose Save changes.

AWS CLI

aws iam update-account-password-policy \
--allow-users-to-change-password \
--require-uppercase-characters \
--require-lowercase-characters \
--require-symbols \
--require-numbers \
--minimum-password-length 14 \
--password-reuse-prevention 24 \
--max-password-age 90

Learn about this finding type's supported assets and scan settings.

Iam Password Policy Prevents Password Reuse

Category name in the API: IAM_PASSWORD_POLICY_PREVENTS_PASSWORD_REUSE

IAM password policies can prevent the reuse of a given password by the same user. It is recommended that the password policy prevent the reuse of passwords.

Recommendation: Ensure IAM password policy prevents password reuse

To remediate this finding, complete the following steps:

AWS console

  1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings)
  2. Go to IAM Service on the AWS Console
  3. Click on Account Settings on the Left Pane
  4. Check "Prevent password reuse"
  5. Set "Number of passwords to remember" is set to 24

AWS CLI

 aws iam update-account-password-policy --password-reuse-prevention 24

Learn about this finding type's supported assets and scan settings.

Iam Password Policy Requires Minimum Length 14 Greater

Category name in the API: IAM_PASSWORD_POLICY_REQUIRES_MINIMUM_LENGTH_14_GREATER

Password policies are, in part, used to enforce password complexity requirements. IAM password policies can be used to ensure passwords are at least a given length. It is recommended that the password policy require a minimum password length of 14.

Recommendation: Ensure IAM password policy requires minimum length of 14 or greater

To remediate this finding, complete the following steps:

AWS console

  1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings)
  2. Go to IAM Service on the AWS Console
  3. Click on Account Settings on the Left Pane
  4. Set "Minimum password length" to 14 or greater.
  5. Click "Apply password policy"

AWS CLI

 aws iam update-account-password-policy --minimum-password-length 14

Learn about this finding type's supported assets and scan settings.

Iam Policies Allow Full Administrative Privileges Attached

Category name in the API: IAM_POLICIES_ALLOW_FULL_ADMINISTRATIVE_PRIVILEGES_ATTACHED

IAM policies are the means by which privileges are granted to users, groups, or roles. It is recommended and considered standard security advice to grant least privilege, that is, to grant only the permissions required to perform a task. Determine what users need to do and then craft policies for them that let the users perform only those tasks, instead of allowing full administrative privileges.

Recommendation: Ensure IAM policies that allow full "*:*" administrative privileges are not attached

To remediate this finding, complete the following steps:

AWS console

Perform the following to detach the policy that has full administrative privileges:

  1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
  2. In the navigation pane, click Policies and then search for the policy name found in the audit step.
  3. Select the policy that needs to be deleted.
  4. In the policy action menu, select first Detach
  5. Select all Users, Groups, Roles that have this policy attached
  6. Click Detach Policy
  7. In the policy action menu, select Detach

AWS CLI

Perform the following to detach the policy that has full administrative privileges as found in the audit step:

  1. List all IAM users, groups, and roles that the specified managed policy is attached to:
 aws iam list-entities-for-policy --policy-arn <policy_arn>
  2. Detach the policy from all IAM users:
 aws iam detach-user-policy --user-name <iam_user> --policy-arn <policy_arn>
  3. Detach the policy from all IAM groups:
 aws iam detach-group-policy --group-name <iam_group> --policy-arn <policy_arn>
  4. Detach the policy from all IAM roles:
 aws iam detach-role-policy --role-name <iam_role> --policy-arn <policy_arn>

Learn about this finding type's supported assets and scan settings.

Iam Users Receive Permissions Groups

Category name in the API: IAM_USERS_RECEIVE_PERMISSIONS_GROUPS

IAM users are granted access to services, functions, and data through IAM policies. There are four ways to define policies for a user: 1) edit the user policy directly, also known as an inline or user policy; 2) attach a policy directly to a user; 3) add the user to an IAM group that has an attached policy; 4) add the user to an IAM group that has an inline policy.

Only the third implementation is recommended.

Recommendation: Ensure IAM Users Receive Permissions Only Through Groups
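
One way to move a user from directly attached policies to group-based permissions is with the AWS CLI; the following is a sketch only, with the user name, group name, and policy ARN as placeholders:

# Detach the policy that is attached directly to the user
aws iam detach-user-policy \
  --user-name <user_name> \
  --policy-arn <policy_arn>

# Add the user to a group that carries the required policy
aws iam add-user-to-group \
  --group-name <group_name> \
  --user-name <user_name>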

Learn about this finding type's supported assets and scan settings.

Iam User Group Membership Check

Category name in the API: IAM_USER_GROUP_MEMBERSHIP_CHECK

IAM users should always be part of an IAM group in order to adhere to IAM security best practices.

By adding users to a group, it is possible to share policies among types of users.

Recommendation: Checks whether IAM users are members of at least one IAM group

To remediate this finding, complete the following steps:

Terraform

resource "aws_iam_user" "example" {
  name = "test-iam-user"
  path = "/users/dev/"
}

resource "aws_iam_group" "example" {
  name = "Developers"
  path = "/users/dev/"
}

resource "aws_iam_user_group_membership" "example" {
  user   = aws_iam_user.example.name
  groups = [aws_iam_group.example.name]
}

AWS Console

When you use the AWS Management Console to delete an IAM user, IAM automatically deletes the following information for you:

  1. The user
  2. Any user group memberships—that is, the user is removed from any IAM user groups that the user was a member of
  3. Any password associated with the user
  4. Any access keys belonging to the user
  5. All inline policies embedded in the user (policies that are applied to a user via user group permissions are not affected)

To delete an IAM user:

  1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
  2. In the navigation pane, choose Users, and then select the check box next to the user name that you want to delete.
  3. At the top of the page, choose Delete.
  4. In the confirmation dialog box, enter the username in the text input field to confirm the deletion of the user.
  5. Choose Delete.

To add a user to an IAM user group:

  1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
  2. In the navigation pane, choose User groups and then choose the name of the group.
  3. Choose the Users tab and then choose Add users. Select the check box next to the users you want to add.
  4. Choose Add users.

AWS CLI

Unlike the Amazon Web Services Management Console, when you delete a user programmatically, you must delete the items attached to the user manually, or the deletion fails.

Before attempting to delete a user, remove the following items:

  1. Password ( DeleteLoginProfile )
  2. Access keys ( DeleteAccessKey )
  3. Signing certificate ( DeleteSigningCertificate )
  4. SSH public key ( DeleteSSHPublicKey )
  5. Git credentials ( DeleteServiceSpecificCredential )
  6. Multi-factor authentication (MFA) device ( DeactivateMFADevice , DeleteVirtualMFADevice )
  7. Inline policies ( DeleteUserPolicy )
  8. Attached managed policies ( DetachUserPolicy )
  9. Group memberships ( RemoveUserFromGroup )

To delete a user, after deleting all items attached to the user:

aws iam delete-user \
  --user-name "test-user"

To add an IAM user to an IAM group:

aws iam add-user-to-group \
  --group-name "test-group"
  --user-name "test-user"

Learn about this finding type's supported assets and scan settings.

Iam User Mfa Enabled

Category name in the API: IAM_USER_MFA_ENABLED

Multi-factor authentication (MFA) is a best practice that adds an extra layer of protection on top of user names and passwords. With MFA, when a user signs in to the AWS Management Console, they are required to provide a time-sensitive authentication code, provided by a registered virtual or physical device.

Recommendation: Checks whether the AWS IAM users have multi-factor authentication (MFA) enabled

To remediate this finding, complete the following steps:

Terraform

When it comes to Terraform, there are a few options to remediate the absence of MFA devices. You probably already have a sensible structure for organizing your users into groups and restrictive policies.

The following example shows how to:

  1. Create users.
  2. Create users login profiles with a PGP Public key.
  3. Create group and group policy that allows self management of IAM profile.
  4. Attach users to group.
  5. Create Virtual MFA devices for users.
  6. Provide each user with the output QR Code and password.
variable "users" {
  type = set(string)
  default = [
    "test@example.com",
    "test2@example.com"
  ]
}

resource "aws_iam_user" "test_users" {
  for_each = toset(var.users)
  name     = each.key
}

resource "aws_iam_user_login_profile" "test_users_profile" {
  for_each                = var.users
  user                    = each.key
  # Key pair created using GnuPG, this is the public key
  pgp_key = file("path/to/gpg_pub_key_base64.pem")
  password_reset_required = true
  lifecycle {
    ignore_changes = [
      password_length,
      password_reset_required,
      pgp_key,
    ]
  }
}

resource "aws_iam_virtual_mfa_device" "test_mfa" {
  for_each                = toset(var.users)
  virtual_mfa_device_name = each.key
}

resource "aws_iam_group" "enforce_mfa_group" {
  name = "EnforceMFAGroup"
}

resource "aws_iam_group_membership" "enforce_mfa_group_membership" {
  name  = "EnforceMFAGroupMembership"
  group = aws_iam_group.enforce_mfa_group.name
  users = [for k in aws_iam_user.test_users : k.name]
}

resource "aws_iam_group_policy" "enforce_mfa_policy" {
  name   = "EnforceMFAGroupPolicy"
  group  = aws_iam_group.enforce_mfa_group.id
  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
        "Sid": "AllowViewAccountInfo",
        "Effect": "Allow",
        "Action": [
            "iam:GetAccountPasswordPolicy",
            "iam:ListVirtualMFADevices"
        ],
        "Resource": "*"
    },
    {
        "Sid": "AllowManageOwnPasswords",
        "Effect": "Allow",
        "Action": [
            "iam:ChangePassword",
            "iam:GetUser"
        ],
        "Resource": "arn:aws:iam::*:user/$${aws:username}"
    },
    {
        "Sid": "AllowManageOwnAccessKeys",
        "Effect": "Allow",
        "Action": [
            "iam:CreateAccessKey",
            "iam:DeleteAccessKey",
            "iam:ListAccessKeys",
            "iam:UpdateAccessKey"
        ],
        "Resource": "arn:aws:iam::*:user/$${aws:username}"
    },
    {
        "Sid": "AllowManageOwnSigningCertificates",
        "Effect": "Allow",
        "Action": [
            "iam:DeleteSigningCertificate",
            "iam:ListSigningCertificates",
            "iam:UpdateSigningCertificate",
            "iam:UploadSigningCertificate"
        ],
        "Resource": "arn:aws:iam::*:user/$${aws:username}"
    },
    {
        "Sid": "AllowManageOwnSSHPublicKeys",
        "Effect": "Allow",
        "Action": [
            "iam:DeleteSSHPublicKey",
            "iam:GetSSHPublicKey",
            "iam:ListSSHPublicKeys",
            "iam:UpdateSSHPublicKey",
            "iam:UploadSSHPublicKey"
        ],
        "Resource": "arn:aws:iam::*:user/$${aws:username}"
    },
    {
        "Sid": "AllowManageOwnGitCredentials",
        "Effect": "Allow",
        "Action": [
            "iam:CreateServiceSpecificCredential",
            "iam:DeleteServiceSpecificCredential",
            "iam:ListServiceSpecificCredentials",
            "iam:ResetServiceSpecificCredential",
            "iam:UpdateServiceSpecificCredential"
        ],
        "Resource": "arn:aws:iam::*:user/$${aws:username}"
    },
    {
        "Sid": "AllowManageOwnVirtualMFADevice",
        "Effect": "Allow",
        "Action": [
            "iam:CreateVirtualMFADevice",
            "iam:DeleteVirtualMFADevice"
        ],
        "Resource": "arn:aws:iam::*:mfa/$${aws:username}"
    },
    {
        "Sid": "AllowManageOwnUserMFA",
        "Effect": "Allow",
        "Action": [
            "iam:DeactivateMFADevice",
            "iam:EnableMFADevice",
            "iam:ListMFADevices",
            "iam:ResyncMFADevice"
        ],
        "Resource": "arn:aws:iam::*:user/$${aws:username}"
    },
    {
        "Sid": "DenyAllExceptListedIfNoMFA",
        "Effect": "Deny",
        "NotAction": [
            "iam:CreateVirtualMFADevice",
            "iam:EnableMFADevice",
            "iam:GetUser",
            "iam:ListMFADevices",
            "iam:ListVirtualMFADevices",
            "iam:ResyncMFADevice",
            "sts:GetSessionToken"
        ],
        "Resource": "*",
        "Condition": {
            "BoolIfExists": {
                "aws:MultiFactorAuthPresent": "false"
            }
        }
    }
  ]
}
POLICY
}

output "user_password_map" {
  # Outputs a map in the format {"test@example.com": <PGPEncryptedPassword>, "test2@example.com": <PGPEncryptedPassword>}
  value = { for k, v in aws_iam_user_login_profile.test_users_profile : k => v.password }
}

output "user_qr_map" {
  # Outputs a map in the format {"test@example.com": <QRCode>, "test2@example.com": <QRCode>}
  value = { for k, v in aws_iam_virtual_mfa_device.test_mfa : k => v.qr_code_png }
}

AWS Console

To enable MFA for any user accounts with AWS console access, see Enabling a virtual multi-factor authentication (MFA) device (console) in the AWS documentation.

AWS CLI

Create an MFA device

aws iam create-virtual-mfa-device \
  --virtual-mfa-device-name "test@example.com" \
  --outfile ./QRCode.png \
  --bootstrap-method QRCodePNG

Enable MFA device for existing user

aws iam enable-mfa-device \
  --user-name "test@example.com" \
  --serial-number "arn:aws:iam::123456976749:mfa/test@example.com" \
  --authentication-code1 123456 \
  --authentication-code2 654321

Learn about this finding type's supported assets and scan settings.

Iam User Unused Credentials Check

Category name in the API: IAM_USER_UNUSED_CREDENTIALS_CHECK

This checks for any IAM passwords or active access keys that have not been used in the last 90 days.

Best practices recommend that you remove, deactivate, or rotate all credentials unused for 90 days or more. This reduces the window of opportunity for credentials associated with a compromised or abandoned account to be used.

Recommendation: Checks that all AWS IAM users have passwords or active access keys that have not been used in maxCredentialUsageAge days (default 90)

To remediate this finding, complete the following steps:

Terraform

In order to remove expired Access Keys created via Terraform, remove the aws_iam_access_key resource from your module and apply the change.

In order to reset an IAM user login password, use the -replace option when running terraform apply.

Suppose the following user login profile:

resource "aws_iam_user" "example" {
  name          = "test@example.com"
  path          = "/users/"
  force_destroy = true
}

resource "aws_iam_user_login_profile" "example" {
  user    = aws_iam_user.example.name
  pgp_key = "keybase:some_person_that_exists"
}

Run the following command to reset the user's login profile password

terraform apply -replace="aws_iam_user_login_profile.example"

AWS Console

To disable credentials for inactive accounts:

  1. Open the IAM console at https://console.aws.amazon.com/iam/.
  2. Choose Users.
  3. Choose the name of the user that has credentials over 90 days old/last used.
  4. Choose Security credentials.
  5. For each sign-in credential and access key that hasn't been used in at least 90 days, choose Make inactive.

To require a new password from console users on next login:

  1. Open the IAM console at https://console.aws.amazon.com/iam/.
  2. Choose Users.
  3. Choose the name of the user that has credentials over 90 days old/last used.
  4. Choose Security credentials.
  5. Under Sign-in credentials and console password, choose Manage.
  6. Set a new password (autogenerated or custom).
  7. Check the box to Require password reset.
  8. Choose Apply.

AWS CLI

To make Access Keys inactive

aws iam update-access-key \
  --access-key-id <value> \
  --status "Inactive"

To delete Access Keys

aws iam delete-access-key \
  --access-key-id <value>

To reset a user login profile password

aws iam update-login-profile \
  --user-name "test@example.com" \
  --password <temporary_password> \
  --password-reset-required

Learn about this finding type's supported assets and scan settings.

Kms Cmk Not Scheduled For Deletion

Category name in the API: KMS_CMK_NOT_SCHEDULED_FOR_DELETION

This control checks whether KMS keys are scheduled for deletion. The control fails if a KMS key is scheduled for deletion.

KMS keys cannot be recovered once deleted. Data encrypted under a KMS key is also permanently unrecoverable if the KMS key is deleted. If meaningful data has been encrypted under a KMS key scheduled for deletion, consider decrypting the data or re-encrypting the data under a new KMS key unless you are intentionally performing a cryptographic erasure.

When a KMS key is scheduled for deletion, a mandatory waiting period is enforced to allow time to reverse the deletion, if it was scheduled in error. The default waiting period is 30 days, but it can be reduced to as short as 7 days when the KMS key is scheduled for deletion. During the waiting period, the scheduled deletion can be canceled and the KMS key will not be deleted.

For additional information regarding deleting KMS keys, see Deleting KMS keys in the AWS Key Management Service Developer Guide.

Recommendation: Checks that all CMKs are not scheduled for deletion
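
If a KMS key was scheduled for deletion in error, you can cancel the scheduled deletion during the waiting period with the AWS CLI; the following is a sketch only, with the key ID as a placeholder:

aws kms cancel-key-deletion --key-id <key_id>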

Learn about this finding type's supported assets and scan settings.

Lambda Concurrency Check

Category name in the API: LAMBDA_CONCURRENCY_CHECK

Checks if the Lambda function is configured with a function-level concurrent execution limit. The rule is NON_COMPLIANT if the Lambda function is not configured with a function-level concurrent execution limit.

Recommendation: Checks whether Lambda functions are configured with function-level concurrent execution limit
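
One way to set a function-level concurrent execution limit is with the AWS CLI; the following is a sketch only, with the function name and limit as placeholders:

aws lambda put-function-concurrency \
  --function-name <function_name> \
  --reserved-concurrent-executions <limit>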

Learn about this finding type's supported assets and scan settings.

Lambda Dlq Check

Category name in the API: LAMBDA_DLQ_CHECK

Checks if a Lambda function is configured with a dead-letter queue. The rule is NON_COMPLIANT if the Lambda function is not configured with a dead-letter queue.

Recommendation: Checks whether Lambda functions are configured with a dead letter queue
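
One way to configure a dead-letter queue is with the AWS CLI; the following is a sketch only, with the function name and the SQS queue or SNS topic ARN as placeholders:

aws lambda update-function-configuration \
  --function-name <function_name> \
  --dead-letter-config TargetArn=<sqs_queue_or_sns_topic_arn>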

Learn about this finding type's supported assets and scan settings.

Lambda Function Public Access Prohibited

Category name in the API: LAMBDA_FUNCTION_PUBLIC_ACCESS_PROHIBITED

AWS best practices recommend that Lambda functions should not be publicly exposed. This policy checks all Lambda functions deployed across all enabled regions within your account and fails if they are configured to allow public access.

Recommendation: Checks whether the policy attached to the Lambda function prohibits public access

To remediate this finding, complete the following steps:

Terraform

The following example shows how to use Terraform to provision an IAM role that restricts access to a Lambda function and to attach that role to the function.

resource "aws_iam_role" "iam_for_lambda" {
  name = "iam_for_lambda"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_lambda_function" "test_lambda" {
  filename      = "lambda_function_payload.zip"
  function_name = "lambda_function_name"
  role          = aws_iam_role.iam_for_lambda.arn
  handler       = "index.test"

  source_code_hash = filebase64sha256("lambda_function_payload.zip")

  runtime = "nodejs12.x"

}

AWS Console

If a Lambda function fails this control, it indicates that the resource-based policy statement for the Lambda function allows public access.

To remediate the issue, you must update the policy to remove the permissions or to add the AWS:SourceAccount condition. You can only update the resource-based policy from the Lambda API.

The following instructions use the console to review the policy and the AWS Command Line Interface to remove the permissions.

To view the resource-based policy for a Lambda function

  1. Open the AWS Lambda console at https://console.aws.amazon.com/lambda/.
  2. In the navigation pane, choose Functions.
  3. Choose the function.
  4. Choose Permissions. The resource-based policy shows the permissions that are applied when another account or AWS service attempts to access the function.
  5. Examine the resource-based policy.
  6. Identify the policy statement that has Principal field values that make the policy public. For example, allowing "*" or { "AWS": "*" }.

You cannot edit the policy from the console. To remove permissions from the function, you use the remove-permission command from the AWS CLI.

Note the value of the statement ID (Sid) for the statement that you want to remove.

AWS CLI

To use the CLI to remove permissions from a Lambda function, issue the remove-permission command as follows.

aws lambda remove-permission \
--function-name <value> \
--statement-id <value>

Learn about this finding type's supported assets and scan settings.

Lambda Inside Vpc

Category name in the API: LAMBDA_INSIDE_VPC

Checks whether a Lambda function is in a VPC. You might see failed findings for Lambda@Edge resources.

It does not evaluate the VPC subnet routing configuration to determine public reachability.

Recommendation: Checks whether the Lambda functions exists within a VPC

To remediate this finding, complete the following steps:

AWS Console

To configure a function to connect to private subnets in a virtual private cloud (VPC) in your account:

  1. Open the AWS Lambda console at https://console.aws.amazon.com/lambda/.
  2. Navigate to Functions and then select your Lambda function.
  3. Scroll to Network and then select a VPC with the connectivity requirements of the function.
  4. To run your functions in high availability mode, Security Hub recommends that you choose at least two subnets.
  5. Choose at least one security group that has the connectivity requirements of the function.
  6. Choose Save.

For more information see the section on configuring a Lambda function to access resources in a VPC in the AWS Lambda Developer Guide.
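
AWS CLI

The VPC configuration can also be applied with the AWS CLI; the following is a sketch only, with the function name, subnet IDs, and security group ID as placeholders:

aws lambda update-function-configuration \
  --function-name <function_name> \
  --vpc-config SubnetIds=<subnet_id_1>,<subnet_id_2>,SecurityGroupIds=<security_group_id>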

Learn about this finding type's supported assets and scan settings.

Mfa Delete Enabled S3 Buckets

Category name in the API: MFA_DELETE_ENABLED_S3_BUCKETS

Once MFA Delete is enabled on your sensitive and classified S3 bucket, it requires the user to have two forms of authentication.

Recommendation: Ensure MFA Delete is enabled on S3 buckets
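
MFA Delete can only be enabled by the bucket owner (the root user) using the AWS CLI or API, and it requires versioning to be enabled on the bucket. The following is a sketch only, with the bucket name, account ID, MFA device ARN, and MFA code as placeholders:

# Must be run with the root user's credentials and a valid MFA code
aws s3api put-bucket-versioning \
  --bucket <bucket_name> \
  --versioning-configuration Status=Enabled,MFADelete=Enabled \
  --mfa "arn:aws:iam::<account_id>:mfa/root-account-mfa-device <mfa_code>"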

Learn about this finding type's supported assets and scan settings.

Mfa Enabled Root User Account

Category name in the API: MFA_ENABLED_ROOT_USER_ACCOUNT

The 'root' user account is the most privileged user in an AWS account. Multi-factor Authentication (MFA) adds an extra layer of protection on top of a username and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their username and password as well as for an authentication code from their AWS MFA device.

Note: When virtual MFA is used for 'root' accounts, it is recommended that the device used is NOT a personal device, but rather a dedicated mobile device (tablet or phone) that is managed to be kept charged and secured independent of any individual personal devices. ("non-personal virtual MFA") This lessens the risks of losing access to the MFA due to device loss, device trade-in or if the individual owning the device is no longer employed at the company.

Recommendation: Ensure MFA is enabled for the 'root' user account

Learn about this finding type's supported assets and scan settings.

Multi Factor Authentication Mfa Enabled All Iam Users Console

Category name in the API: MULTI_FACTOR_AUTHENTICATION_MFA_ENABLED_ALL_IAM_USERS_CONSOLE

Multi-Factor Authentication (MFA) adds an extra layer of authentication assurance beyond traditional credentials. With MFA enabled, when a user signs in to the AWS Console, they will be prompted for their user name and password as well as for an authentication code from their physical or virtual MFA token. It is recommended that MFA be enabled for all accounts that have a console password.

Recommendation: Ensure multi-factor authentication (MFA) is enabled for all IAM users that have a console password

To remediate this finding, complete the following steps:

AWS console

  1. Sign in to the AWS Management Console and open the IAM console at 'https://console.aws.amazon.com/iam/'
  2. In the left pane, select Users.
  3. In the User Name list, choose the name of the intended MFA user.
  4. Choose the Security Credentials tab, and then choose Manage MFA Device.
  5. In the Manage MFA Device wizard, choose Virtual MFA device, and then choose Continue.

    IAM generates and displays configuration information for the virtual MFA device, including a QR code graphic. The graphic is a representation of the 'secret configuration key' that is available for manual entry on devices that do not support QR codes.

  6. Open your virtual MFA application. (For a list of apps that you can use for hosting virtual MFA devices, see Virtual MFA Applications at https://aws.amazon.com/iam/details/mfa/#Virtual_MFA_Applications). If the virtual MFA application supports multiple accounts (multiple virtual MFA devices), choose the option to create a new account (a new virtual MFA device).

  7. Determine whether the MFA app supports QR codes, and then do one of the following:

    • Use the app to scan the QR code. For example, you might choose the camera icon or choose an option similar to Scan code, and then use the device's camera to scan the code.
    • In the Manage MFA Device wizard, choose Show secret key for manual configuration, and then type the secret configuration key into your MFA application.

    When you are finished, the virtual MFA device starts generating one-time passwords.

  8. In the Manage MFA Device wizard, in the MFA Code 1 box, type the one-time password that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second one-time password into the MFA Code 2 box.

  9. Click Assign MFA.

Learn about this finding type's supported assets and scan settings.

No Network Acls Allow Ingress 0 0 0 0 Remote Server Administration

Category name in the API: NO_NETWORK_ACLS_ALLOW_INGRESS_0_0_0_0_REMOTE_SERVER_ADMINISTRATION

The Network Access Control List (NACL) function provides stateless filtering of ingress and egress network traffic to AWS resources. It is recommended that no NACL allows unrestricted ingress access to remote server administration ports, such as SSH to port 22 and RDP to port 3389, using either the TCP (6), UDP (17), or ALL (-1) protocols.

Recommendation: Ensure no Network ACLs allow ingress from 0.0.0.0/0 to remote server administration ports

To remediate this finding, complete the following steps:

AWS console

Perform the following:

  1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home
  2. In the left pane, click Network ACLs
  3. For each network ACL to remediate, perform the following:
    • Select the network ACL
    • Click the Inbound Rules tab
    • Click Edit inbound rules
    • Either A) update the Source field to a range other than 0.0.0.0/0, or, B) Click Delete to remove the offending inbound rule
    • Click Save
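
AWS CLI

The offending inbound rule can also be removed with the AWS CLI; the following is a sketch only, with the network ACL ID and rule number as placeholders:

aws ec2 delete-network-acl-entry \
  --network-acl-id <network_acl_id> \
  --ingress \
  --rule-number <rule_number>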

Learn about this finding type's supported assets and scan settings.

No Root User Account Access Key Exists

Category name in the API: NO_ROOT_USER_ACCOUNT_ACCESS_KEY_EXISTS

The 'root' user account is the most privileged user in an AWS account. AWS Access Keys provide programmatic access to a given AWS account. It is recommended that all access keys associated with the 'root' user account be deleted.

Recommendation: Ensure no 'root' user account access key exists

To remediate this finding, complete the following steps:

AWS console

  1. Sign in to the AWS Management Console as 'root' and open the IAM console at https://console.aws.amazon.com/iam/.
  2. Click on <root_account> at the top right and select My Security Credentials from the drop down list.
  3. On the pop out screen Click on Continue to Security Credentials.
  4. Click on Access Keys (Access Key ID and Secret Access Key).
  5. Under the Status column (if there are any Keys which are active).
  6. Click Delete (Note: Deleted keys cannot be recovered).

Learn about this finding type's supported assets and scan settings.

No Security Groups Allow Ingress 0 0 0 0 Remote Server Administration

Category name in the API: NO_SECURITY_GROUPS_ALLOW_INGRESS_0_0_0_0_REMOTE_SERVER_ADMINISTRATION

Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. It is recommended that no security group allows unrestricted ingress access to remote server administration ports, such as SSH to port 22 and RDP to port 3389, using either the TCP (6), UDP (17), or ALL (-1) protocols.

Recommendation: Ensure no security groups allow ingress from 0.0.0.0/0 to remote server administration ports

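Step-by-step instructions are not provided for this finding type. A typical remediation, sketched below with a placeholder security group ID, is to revoke the offending rule with the AWS CLI.

# Remove an inbound rule that allows RDP (port 3389) from 0.0.0.0/0;
# repeat for port 22 (SSH) or any other administration port as needed.
aws ec2 revoke-security-group-ingress \
  --group-id sg-0abc1234 \
  --protocol tcp \
  --port 3389 \
  --cidr 0.0.0.0/0
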
Learn about this finding type's supported assets and scan settings.

No Security Groups Allow Ingress 0 Remote Server Administration

Category name in the API: NO_SECURITY_GROUPS_ALLOW_INGRESS_0_REMOTE_SERVER_ADMINISTRATION

Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. It is recommended that no security group allows unrestricted ingress access to remote server administration ports, such as SSH to port 22 and RDP to port 3389.

Recommendation: Ensure no security groups allow ingress from ::/0 to remote server administration ports

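As with the IPv4 variant, step-by-step instructions are not provided for this finding type. One option, sketched here with a placeholder group ID, is to revoke the IPv6 rule with the AWS CLI; IPv6 ranges must be expressed with --ip-permissions.

# Remove an inbound rule that allows SSH (port 22) from ::/0.
aws ec2 revoke-security-group-ingress \
  --group-id sg-0abc1234 \
  --ip-permissions '[{"IpProtocol":"tcp","FromPort":22,"ToPort":22,"Ipv6Ranges":[{"CidrIpv6":"::/0"}]}]'
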
Learn about this finding type's supported assets and scan settings.

One Active Access Key Available Any Single Iam User

Category name in the API: ONE_ACTIVE_ACCESS_KEY_AVAILABLE_ANY_SINGLE_IAM_USER

Access keys are long-term credentials for an IAM user or the AWS account 'root' user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK).

Recommendation: Ensure there is only one active access key available for any single IAM user

To remediate this finding, complete the following steps:

AWS console

  1. Sign in to the AWS Management Console and navigate to IAM dashboard at https://console.aws.amazon.com/iam/.
  2. In the left navigation panel, choose Users.
  3. Click on the IAM user name that you want to examine.
  4. On the IAM user configuration page, select Security Credentials tab.
  5. In Access Keys section, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working.
  6. In the same Access Keys section, identify your non-operational access keys (other than the chosen one) and deactivate them by clicking the Make Inactive link.
  7. If you receive the Change Key Status confirmation box, click Deactivate to switch off the selected key.
  8. Repeat steps no. 3 – 7 for each IAM user in your AWS account.

AWS CLI

  1. Using the IAM user and access key information provided in the Audit CLI, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working.

  2. Run the update-access-key command below using the IAM user name and the non-operational access key IDs to deactivate the unnecessary key(s). Refer to the Audit section to identify the unnecessary access key ID for the selected IAM user.

Note - the command does not return any output:

aws iam update-access-key --access-key-id <access-key-id> --status Inactive --user-name <user-name>
  3. To confirm that the selected access key pair has been successfully deactivated, run the list-access-keys audit command again for that IAM user:
aws iam list-access-keys --user-name <user-name>
  • The command output should expose the metadata for each access key associated with the IAM user. If the non-operational key pair(s) Status is set to Inactive, the key has been successfully deactivated and the IAM user access configuration adheres now to this recommendation.
  4. Repeat steps no. 1 – 3 for each IAM user in your AWS account.

Learn about this finding type's supported assets and scan settings.

Public Access Given Rds Instance

Category name in the API: PUBLIC_ACCESS_GIVEN_RDS_INSTANCE

Ensure that RDS database instances provisioned in your AWS account restrict unauthorized access in order to minimize security risks. To restrict access to any publicly accessible RDS database instance, you must disable the database Publicly Accessible flag and update the VPC security group associated with the instance.

Recommendation: Ensure that public access is not given to RDS Instance

To remediate this finding, complete the following steps:

AWS console

  1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/.
  2. In the navigation panel, on the RDS Dashboard, click Databases.
  3. Select the RDS instance that you want to update.
  4. Click Modify from the dashboard top menu.
  5. On the Modify DB Instance panel, under the Connectivity section, click Additional connectivity configuration and set Publicly Accessible to Not publicly accessible to restrict public access. Then follow the steps below to update the subnet configuration:
  6. Select the Connectivity and security tab, and click on the VPC attribute value inside the Networking section.
  7. Select the Details tab from the VPC dashboard bottom panel and click on Route table configuration attribute value.
  8. On the Route table details page, select the Routes tab from the dashboard bottom panel and click on Edit routes.
  9. On the Edit routes page, update or remove the route whose Target is set to igw-xxxxx (the internet gateway), and then click Save routes.
  10. On the Modify DB Instance panel, click Continue. In the Scheduling of modifications section, perform one of the following actions based on your requirements:
  11. Select Apply during the next scheduled maintenance window to apply the changes automatically during the next scheduled maintenance window.
  12. Select Apply immediately to apply the changes right away. With this option, any pending modifications will be asynchronously applied as soon as possible, regardless of the maintenance window setting for this RDS database instance. Note that any changes available in the pending modifications queue are also applied. If any of the pending modifications require downtime, choosing this option can cause unexpected downtime for the application.
  13. Repeat steps 3 to 6 for each RDS instance available in the current region.
  14. Change the AWS region from the navigation bar to repeat the process for other regions.

AWS CLI

  1. Run describe-db-instances command to list all RDS database names identifiers, available in the selected AWS region:
aws rds describe-db-instances --region <region-name> --query 'DBInstances[*].DBInstanceIdentifier'
  2. The command output should return each database instance identifier.
  3. Run the modify-db-instance command to modify the selected RDS instance configuration. The following command disables the Publicly Accessible flag for the selected RDS instance. This command uses the --apply-immediately flag; if you want to avoid downtime, use the --no-apply-immediately flag instead:
aws rds modify-db-instance --region <region-name> --db-instance-identifier <db-name> --no-publicly-accessible --apply-immediately
  4. The command output should show the PubliclyAccessible configuration under pending values, which is applied at the specified time.
  5. Updating the Internet Gateway Destination via the AWS CLI is not currently supported. To update information about the Internet Gateway, use the AWS Console procedure.
  6. Repeat steps 1 to 5 for each RDS instance provisioned in the current region.
  7. Change the AWS region by using the --region filter to repeat the process for other regions.

Learn about this finding type's supported assets and scan settings.

Rds Enhanced Monitoring Enabled

Category name in the API: RDS_ENHANCED_MONITORING_ENABLED

Enhanced monitoring provides real-time metrics on the operating system that the RDS instance runs on, via an agent installed in the instance.

For more details, see Monitoring OS metrics with Enhanced Monitoring.

Recommendation: Checks whether enhanced monitoring is enabled for all RDS DB instances

To remediate this finding, complete the following steps:

Terraform

To remediate this control, enable Enhanced Monitoring on your RDS instances as follows:

Create an IAM Role for RDS:

resource "aws_iam_role" "rds_logging" {
  name = "CustomRoleForRDSMonitoring"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = "CustomRoleForRDSLogging"
        Principal = {
          Service = "monitoring.rds.amazonaws.com"
        }
      },
    ]
  })
}

Retrieve the AWS Managed Policy for RDS Enhanced Monitoring:

data "aws_iam_policy" "rds_logging" {
  name = "AmazonRDSEnhancedMonitoringRole"
}

Attach the policy to the role:

resource "aws_iam_policy_attachment" "rds_logging" {
  name       = "AttachRdsLogging"
  roles      = [aws_iam_role.rds_logging.name]
  policy_arn = data.aws_iam_policy.rds_logging.arn
}

Define a monitoring interval and a monitoring role ARN on the violating RDS instance to enable Enhanced Monitoring:

resource "aws_db_instance" "default" {
  identifier           = "test-rds"
  allocated_storage    = 10
  engine               = "mysql"
  engine_version       = "5.7"
  instance_class       = "db.t3.micro"
  db_name              = "mydb"
  username             = "foo"
  password             = "foobarbaz"
  parameter_group_name = "default.mysql5.7"
  skip_final_snapshot  = true
  monitoring_interval  = 60
  monitoring_role_arn  = aws_iam_role.rds_logging.arn
}

AWS Console

You can turn on Enhanced Monitoring when you create a DB instance, Multi-AZ DB cluster, or read replica, or when you modify a DB instance or Multi-AZ DB cluster. If you modify a DB instance to turn on Enhanced Monitoring, you don't need to reboot your DB instance for the change to take effect.

You can turn on Enhanced Monitoring in the RDS console when you do one of the following actions in the Databases page:

  • Create a DB instance or Multi-AZ DB cluster - Choose Create database.
  • Create a read replica - Choose Actions, then Create read replica.
  • Modify a DB instance or Multi-AZ DB cluster - Choose Modify.

To turn Enhanced Monitoring on or off in the RDS console

  1. Scroll to Additional configuration.
  2. In Monitoring, choose Enable Enhanced Monitoring for your DB instance or read replica. To turn Enhanced Monitoring off, choose Disable Enhanced Monitoring.
  3. Set the Monitoring Role property to the IAM role that you created to permit Amazon RDS to communicate with Amazon CloudWatch Logs for you, or choose Default to have RDS create a role for you named rds-monitoring-role.
  4. Set the Granularity property to the interval, in seconds, between points when metrics are collected for your DB instance or read replica. The Granularity property can be set to one of the following values: 1, 5, 10, 15, 30, or 60. The fastest that the RDS console refreshes is every 5 seconds. If you set the granularity to 1 second in the RDS console, you still see updated metrics only every 5 seconds. You can retrieve 1-second metric updates by using CloudWatch Logs.

AWS CLI

Create the RDS IAM role:

aws iam create-role \
  --role-name "CustomRoleForRDSMonitoring" \
  --assume-role-policy-document file://rds-assume-role.json

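Here, rds-assume-role.json is the trust policy referenced by the command above; assuming it mirrors the assume_role_policy in the Terraform example, it would look like the following.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CustomRoleForRDSLogging",
      "Effect": "Allow",
      "Principal": {
        "Service": "monitoring.rds.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
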
Attach the policy AmazonRDSEnhancedMonitoringRole to the role:

aws iam attach-role-policy \
  --role-name "CustomRoleForRDSMonitoring"\
  --policy-arn "arn:aws:iam::aws:policy/service-role/AmazonRDSEnhancedMonitoringRole"

Modify the RDS instance to enable Enhanced Monitoring, by setting --monitoring-interval and --monitoring-role-arn:

aws rds modify-db-instance \
  --db-instance-identifier "test-rds" \
  --monitoring-interval 30 \
  --monitoring-role-arn "arn:aws:iam::<account_id>:role/CustomRoleForRDSMonitoring"

Learn about this finding type's supported assets and scan settings.

Rds Instance Deletion Protection Enabled

Category name in the API: RDS_INSTANCE_DELETION_PROTECTION_ENABLED

Enabling instance deletion protection is an additional layer of protection against accidental database deletion or deletion by an unauthorized entity.

While deletion protection is enabled, an RDS DB instance cannot be deleted. Before a deletion request can succeed, deletion protection must be disabled.

Recommendation: Checks if all RDS instances have deletion protection enabled

To remediate this finding, complete the following steps:

Terraform

In order to remediate this control, set deletion_protection to true in the aws_db_instance resource.

resource "aws_db_instance" "example" {
  # ... other configuration ...
  deletion_protection = true
}

AWS Console

To enable deletion protection for an RDS DB instance

  1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.
  2. In the navigation pane, choose Databases, then choose the DB instance that you want to modify.
  3. Choose Modify.
  4. Under Deletion protection, choose Enable deletion protection.
  5. Choose Continue.
  6. Under Scheduling of modifications, choose when to apply modifications. The options are Apply during the next scheduled maintenance window or Apply immediately.
  7. Choose Modify DB Instance.

AWS CLI

The same applies to the AWS CLI. Set --deletion-protection as shown below.

aws rds modify-db-instance \
  --db-instance-identifier "test-rds" \
  --deletion-protection

Learn about this finding type's supported assets and scan settings.

Rds In Backup Plan

Category name in the API: RDS_IN_BACKUP_PLAN

This check evaluates if Amazon RDS DB instances are covered by a backup plan. This control fails if an RDS DB instance isn't covered by a backup plan.

AWS Backup is a fully managed backup service that centralizes and automates the backing up of data across AWS services. With AWS Backup, you can create backup policies called backup plans. You can use these plans to define your backup requirements, such as how frequently to back up your data and how long to retain those backups. Including RDS DB instances in a backup plan helps you protect your data from unintended loss or deletion.

Recommendation: RDS DB instances should be covered by a backup plan

To remediate this finding, complete the following steps:

Terraform

To remediate this control, set the backup_retention_period to a value of at least 7 in the aws_db_instance resource.

resource "aws_db_instance" "example" {
  # ... other Configuration ...
  backup_retention_period = 7
}

AWS Console

To enable automated backups immediately

  1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.
  2. In the navigation pane, choose Databases, and then choose the DB instance that you want to modify.
  3. Choose Modify to open the Modify DB Instance page.
  4. Under Backup Retention Period, choose a positive nonzero value, for example 30 days, then choose Continue.
  5. Select the Scheduling of modifications section and choose when to apply modifications: you can choose to Apply during the next scheduled maintenance window or Apply immediately.
  6. Then, on the confirmation page, choose Modify DB Instance to save your changes and enable automated backups.

AWS CLI

The same applies to the AWS CLI. To enable automated backups, set --backup-retention-period to a value greater than 0.

aws rds modify-db-instance --db-instance-identifier "test-rds" --backup-retention-period 7

Learn about this finding type's supported assets and scan settings.

Rds Logging Enabled

Category name in the API: RDS_LOGGING_ENABLED

This checks whether the relevant Amazon RDS logs are enabled and exported to CloudWatch Logs.

RDS databases should have relevant logs enabled. Database logging provides detailed records of requests made to RDS. Database logs can assist with security and access audits and can help to diagnose availability issues.

Recommendation: Checks if exported logs are enabled for all RDS DB instances

To remediate this finding, complete the following steps:

Terraform

resource "aws_db_instance" "example" {
  # ... other configuration for MySQL ...
  enabled_cloudwatch_logs_exports = ["audit", "error", "general", "slowquery"]
  parameter_group_name            = aws_db_parameter_group.example.name
}

resource "aws_db_parameter_group" "example" {
  name   = "${aws_db_instance.example.dbInstanceIdentifier}-parameter-group"
  family = "mysql5.7"

  parameter {
    name  = "general_log"
    value = 1
  }

  parameter {
    name  = "slow_query_log"
    value = 1
  }

  parameter {
    name  = "log_output"
    value = "FILE"
  }
}

For MariaDB, additionally create a custom option group and set option_group_name in the aws_db_instance resource.

resource "aws_db_instance" "example" {
  # ... other configuration for MariaDB ...
  enabled_cloudwatch_logs_exports = ["audit", "error", "general", "slowquery"]
  parameter_group_name            = aws_db_parameter_group.example.name
  option_group_name               = aws_db_option_group.example.name
}

resource "aws_db_option_group" "example" {
  name                     = "mariadb-option-group-for-logs"
  option_group_description = "MariaDB Option Group for Logs"
  engine_name              = "mariadb"
  option {
    option_name = "MARIADB_AUDIT_PLUGIN"
    option_settings {
      name  = "SERVER_AUDIT_EVENTS"
      value = "CONNECT,QUERY,TABLE,QUERY_DDL,QUERY_DML,QUERY_DCL"
    }
  }
}

AWS Console

To create a custom DB parameter group

  1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.
  2. In the navigation pane, choose Parameter groups.
  3. Choose Create parameter group.
  4. In the Parameter group family list, choose a DB parameter group family.
  5. In the Type list, choose DB Parameter Group.
  6. In Group name, enter the name of the new DB parameter group.
  7. In Description, enter a description for the new DB parameter group.
  8. Choose Create.

To create a new option group for MariaDB logging by using the console

  1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.
  2. In the navigation pane, choose Option groups.
  3. Choose Create group.
  4. In the Create option group window, provide the following:
    • Name: must be unique within your AWS account. Only letters, digits, and hyphens.
    • Description: Only used for display purposes.
    • Engine: select your DB engine.
    • Major engine version: select the major version of your DB engine.
  5. Choose Create.
  6. Choose the name of the option group you just created.
  7. Choose Add option.
  8. Choose MARIADB_AUDIT_PLUGIN from the Option name list.
  9. Set SERVER_AUDIT_EVENTS to CONNECT, QUERY, TABLE, QUERY_DDL, QUERY_DML, QUERY_DCL.
  10. Choose Add option.

To publish SQL Server DB, Oracle DB, or PostgreSQL logs to CloudWatch Logs from the AWS Management Console

  1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.
  2. In the navigation pane, choose Databases.
  3. Choose the DB instance that you want to modify.
  4. Choose Modify.
  5. Under Log exports, choose all of the log files to start publishing to CloudWatch Logs.
  6. Note that Log exports is available only for database engine versions that support publishing to CloudWatch Logs.
  7. Choose Continue. Then on the summary page, choose Modify DB Instance.

To apply a new DB parameter group or DB options group to an RDS DB instance

  1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.
  2. In the navigation pane, choose Databases.
  3. Choose the DB instance that you want to modify.
  4. Choose Modify.
  5. Under Database options, change the DB parameter group and DB options group as needed.
  6. When you finish your changes, choose Continue. Check the summary of modifications.
  7. Choose Modify DB Instance to save your changes.

AWS CLI

Retrieve the engine families and choose the one that matches the DB instance engine and version.

aws rds describe-db-engine-versions \
  --query "DBEngineVersions[].DBParameterGroupFamily" \
  --engine "mysql"

Create a parameter group according to the engine and version.

aws rds create-db-parameter-group \
  --db-parameter-group-name "rds-mysql-parameter-group" \
  --db-parameter-group-family "mysql5.7" \
  --description "Example parameter group for logs"

Create an rds-parameters.json file containing the necessary parameters according to the DB engine. This example uses MySQL 5.7.

[
  {
    "ParameterName": "general_log",
    "ParameterValue": "1",
    "ApplyMethod": "immediate"
  },
  {
    "ParameterName": "slow_query_log",
    "ParameterValue": "1",
    "ApplyMethod": "immediate"
  },
  {
    "ParameterName": "log_output",
    "ParameterValue": "FILE",
    "ApplyMethod": "immediate"
  }
]

Modify the parameter group to add the parameters according to the DB engine. This example uses MySQL 5.7.

aws rds modify-db-parameter-group \
  --db-parameter-group-name "rds-mysql-parameter-group" \
  --parameters file://rds-parameters.json

Modify the DB instance to associate the parameter group.

aws rds modify-db-instance \
  --db-instance-identifier "test-rds" \
  --db-parameter-group-name "rds-mysql-parameter-group"

Additionally for MariaDB, create an option group as follows.

aws rds create-option-group \
  --option-group-name "rds-mariadb-option-group" \
  --engine-name "mariadb" \
  --major-engine-version "10.6" \
  --option-group-description "Option group for MariaDB logs"

Create a rds-mariadb-options.json file as follows.

{
  "OptionName": "MARIADB_AUDIT_PLUGIN",
  "OptionSettings": [
    {
      "Name": "SERVER_AUDIT_EVENTS",
      "Value": "CONNECT,QUERY,TABLE,QUERY_DDL,QUERY_DML,QUERY_DCL"
    }
  ]
}

Add the option to the option group.

aws rds add-option-to-option-group \
  --option-group-name "rds-mariadb-option-group" \
  --options file://rds-mariadb-options.json

Associate the option group to the DB Instance by modifying the MariaDB instance.

aws rds modify-db-instance \
  --db-instance-identifier "rds-test-mariadb" \
  --option-group-name "rds-mariadb-option-group"

Learn about this finding type's supported assets and scan settings.

Rds Multi Az Support

Category name in the API: RDS_MULTI_AZ_SUPPORT

RDS DB instances should be configured for multiple Availability Zones (AZs). This ensures the availability of the data stored. Multi-AZ deployments allow for automated failover if there is an issue with Availability Zone availability and during regular RDS maintenance.

Recommendation: Checks whether high availability is enabled for all RDS DB instances

To remediate this finding, complete the following steps:

Terraform

In order to remediate this control, set multi_az to true in the aws_db_instance resource.

resource "aws_db_instance" "example" {
  # ... other configuration ...
  multi_az                = true
}

AWS Console

To enable multiple Availability Zones for a DB instance

  1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.
  2. In the navigation pane, choose Databases, and then choose the DB instance that you want to modify.
  3. Choose Modify. The Modify DB Instance page appears.
  4. Under Instance Specifications, set Multi-AZ deployment to Yes.
  5. Choose Continue and then check the summary of modifications.
  6. (Optional) Choose Apply immediately to apply the changes immediately. Choosing this option can cause an outage in some cases. For more information, see Using the Apply Immediately setting in the Amazon RDS User Guide.
  7. On the confirmation page, review your changes. If they are correct, choose Modify DB Instance to save your changes.

AWS CLI

The same applies to the AWS CLI. Enable Multi-AZ support by providing the --multi-az option.

aws rds modify-db-instance \
  --db-instance-identifier "test-rds" \
  --multi-az

Learn about this finding type's supported assets and scan settings.

Redshift Cluster Configuration Check

Category name in the API: REDSHIFT_CLUSTER_CONFIGURATION_CHECK

This checks for essential elements of a Redshift cluster: encryption at rest, logging and node type.

These configuration items are important in the maintenance of a secure and observable Redshift cluster.

Recommendation: Checks that all Redshift clusters have encryption at rest, logging and node type.

To remediate this finding, complete the following steps:

Terraform

resource "aws_kms_key" "redshift_encryption" {
  description         = "Used for Redshift encryption configuration"
  enable_key_rotation = true
}

resource "aws_redshift_cluster" "example" {
  # ... other configuration ...
  encrypted                           = true
  kms_key_id                          = aws_kms_key.redshift_encryption.id
  logging {
    enable               = true
    log_destination_type = "cloudwatch"
    log_exports          = ["connectionlog", "userlog", "useractivitylog"]
  }
}

AWS Console

To enable cluster audit logging

  1. Open the Amazon Redshift console at https://console.aws.amazon.com/redshift/.
  2. In the navigation menu, choose Clusters, then choose the name of the cluster to modify.
  3. Choose Properties.
  4. Choose Edit and Edit audit logging.
  5. Set Configure audit logging to Turn on, set Log export type to CloudWatch (recommended) and choose the logs to export.

To use Amazon S3 to manage Redshift audit logs, see Redshift - Database audit logging in the AWS documentation.

  6. Choose Save changes.

To modify database encryption on a cluster

  1. Sign in to the AWS Management Console and open the Amazon Redshift console at https://console.aws.amazon.com/redshift/.
  2. On the navigation menu, choose Clusters, then choose the cluster for which you want to modify encryption.
  3. Choose Properties.
  4. Choose Edit and Edit encryption.
  5. Choose the Encryption to use (KMS or HSM) and provide:

    • For KMS: key to use
    • For HSM: connection and client certificate

AWS CLI

  1. Create a KMS key and retrieve the key ID:
aws kms create-key \
  --description "Key to encrypt Redshift Clusters"
  2. Modify the cluster:
aws redshift modify-cluster \
  --cluster-identifier "test-redshift-cluster" \
  --encrypted \
  --kms-key-id <value>

Learn about this finding type's supported assets and scan settings.

Redshift Cluster Maintenancesettings Check

Category name in the API: REDSHIFT_CLUSTER_MAINTENANCESETTINGS_CHECK

Automatic major version upgrades happen according to the maintenance window.

Recommendation: Checks that all Redshift clusters have allowVersionUpgrade enabled and preferredMaintenanceWindow and automatedSnapshotRetentionPeriod set

To remediate this finding, complete the following steps:

Terraform

The default values provided by Terraform for these attributes are compliant with this check. If a Redshift cluster fails this check, review the requirements and remove any overrides of the following attributes of the aws_redshift_cluster resource.

resource "aws_redshift_cluster" "example" {

  # ...other configuration ...

  # The following values are compliant and set by default if omitted.
  allow_version_upgrade               = true
  preferred_maintenance_window        = "sat:10:00-sat:10:30"
  automated_snapshot_retention_period = 1
}

AWS Console

When creating a Redshift cluster via the AWS console, the default values are already compliant with this control.

For more information, see Managing clusters using the console

AWS CLI

To remediate this control using the AWS CLI, run the following:

aws redshift modify-cluster \
  --cluster-identifier "test-redshift-cluster" \
  --allow-version-upgrade

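The command above only re-enables version upgrades. If the maintenance window or the automated snapshot retention period also needs to be corrected, the same modify-cluster command accepts those settings; the values below are examples, not requirements.

aws redshift modify-cluster \
  --cluster-identifier "test-redshift-cluster" \
  --preferred-maintenance-window "sat:10:00-sat:10:30" \
  --automated-snapshot-retention-period 1
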
Learn about this finding type's supported assets and scan settings.

Redshift Cluster Public Access Check

Category name in the API: REDSHIFT_CLUSTER_PUBLIC_ACCESS_CHECK

The PubliclyAccessible attribute of the Amazon Redshift cluster configuration indicates whether the cluster is publicly accessible. When the cluster is configured with PubliclyAccessible set to true, it is an Internet-facing instance that has a publicly resolvable DNS name, which resolves to a public IP address.

When the cluster is not publicly accessible, it is an internal instance with a DNS name that resolves to a private IP address. Unless you intend for your cluster to be publicly accessible, the cluster should not be configured with PubliclyAccessible set to true.

Recommendation: Checks whether Redshift clusters are publicly accessible

To remediate this finding, complete the following steps:

Terraform

To remediate this control, modify the Redshift cluster resource and set publicly_accessible to false (the default value is true).

resource "aws_redshift_cluster" "example" {
  # ... other configuration ...
  publicly_accessible = false
}

AWS Console

To disable public access to an Amazon Redshift cluster

  1. Open the Amazon Redshift console at https://console.aws.amazon.com/redshift/.
  2. In the navigation menu, choose Clusters, then choose the name of the cluster with the security group to modify.
  3. Choose Actions, then choose Modify publicly accessible setting.
  4. Under Allow instances and devices outside the VPC to connect to your database through the cluster endpoint, choose No.
  5. Choose Confirm.

AWS CLI

Use the modify-cluster command to set --no-publicly-accessible.

aws redshift modify-cluster \
  --cluster-identifier "test-redshift-cluster" \
  --no-publicly-accessible

Learn about this finding type's supported assets and scan settings.

Restricted Common Ports

Category name in the API: RESTRICTED_COMMON_PORTS

This checks whether security groups allow unrestricted incoming traffic to the specified high-risk ports. This control fails if any of the rules in a security group allow ingress traffic from '0.0.0.0/0' or '::/0' to those ports.

Unrestricted access (0.0.0.0/0) increases opportunities for malicious activity, such as hacking, denial-of-service attacks, and loss of data.

Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. No security group should allow unrestricted ingress access to the following ports:

  • 20, 21 (FTP)
  • 22 (SSH)
  • 23 (Telnet)
  • 25 (SMTP)
  • 110 (POP3)
  • 135 (RPC)
  • 143 (IMAP)
  • 445 (CIFS)
  • 1433, 1434 (MSSQL)
  • 3000 (Go, Node.js, and Ruby web development frameworks)
  • 3306 (mySQL)
  • 3389 (RDP)
  • 4333 (ahsp)
  • 5000 (Python web development frameworks)
  • 5432 (postgresql)
  • 5500 (fcp-addr-srvr1)
  • 5601 (OpenSearch Dashboards)
  • 8080 (proxy)
  • 8088 (legacy HTTP port)
  • 8888 (alternative HTTP port)
  • 9200 or 9300 (OpenSearch)

Recommendation: Security groups should not allow unrestricted access to ports with high risk

To remediate this finding, complete the following steps:

AWS Console

To delete a security group rule:

  1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
  2. In the navigation pane, choose Security Groups.
  3. Select the security group to update, choose Actions, and then choose Edit inbound rules to remove an inbound rule or Edit outbound rules to remove an outbound rule.
  4. Choose the Delete button to the right of the rule to delete.
  5. Choose Preview changes, Confirm.

For information on how to delete rules from a security group, see Delete rules from a security group in the Amazon EC2 User Guide for Linux Instances.

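If you prefer the AWS CLI, a rule that exposes one of the listed ports to the world can be revoked directly; the security group ID and port below are placeholders.

# Remove an inbound rule that allows port 3306 (MySQL) from 0.0.0.0/0.
aws ec2 revoke-security-group-ingress \
  --group-id sg-0abc1234 \
  --protocol tcp \
  --port 3306 \
  --cidr 0.0.0.0/0
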
Learn about this finding type's supported assets and scan settings.

Restricted Ssh

Category name in the API: RESTRICTED_SSH

Security groups provide stateful filtering of ingress and egress network traffic to AWS resources.

CIS recommends that no security group allow unrestricted ingress access to port 22. Removing unfettered connectivity to remote console services, such as SSH, reduces a server's exposure to risk.

Recommendation: Security groups should not allow ingress from 0.0.0.0/0 to port 22

To remediate this finding, complete the following steps:

AWS Console

Perform the following steps for each security group associated with a VPC.

Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.

  1. In the left pane, choose Security groups.
  2. Select a security group.
  3. In the bottom section of the page, choose the Inbound Rules tab.
  4. Choose Edit rules.
  5. Identify the rule that allows access through port 22 and then choose the X to remove it.
  6. Choose Save rules.

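A CLI alternative, sketched with placeholder values, is to revoke the world-open rule and re-authorize SSH only from a known administrative range.

# Revoke SSH access from 0.0.0.0/0 on the affected security group.
aws ec2 revoke-security-group-ingress \
  --group-id sg-0abc1234 \
  --protocol tcp --port 22 --cidr 0.0.0.0/0

# Re-allow SSH only from a trusted range (203.0.113.0/24 is an example CIDR).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0abc1234 \
  --protocol tcp --port 22 --cidr 203.0.113.0/24
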
Learn about this finding type's supported assets and scan settings.

Rotation Customer Created Cmks Enabled

Category name in the API: ROTATION_CUSTOMER_CREATED_CMKS_ENABLED

Checks if automatic key rotation is enabled for each customer-created AWS KMS key, matched by key ID. The rule is NON_COMPLIANT if the AWS Config recorder role for a resource does not have the kms:DescribeKey permission.

Recommendation: Ensure rotation for customer created CMKs is enabled

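Remediation steps are not listed for this finding type; enabling automatic rotation on the affected key is typically sufficient. A minimal AWS CLI sketch (the key ID is a placeholder):

# Enable automatic annual rotation for the customer managed key.
aws kms enable-key-rotation --key-id <kms_key_id>

# Verify that rotation is now enabled.
aws kms get-key-rotation-status --key-id <kms_key_id>
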
Learn about this finding type's supported assets and scan settings.

Rotation Customer Created Symmetric Cmks Enabled

Category name in the API: ROTATION_CUSTOMER_CREATED_SYMMETRIC_CMKS_ENABLED

AWS Key Management Service (KMS) allows customers to rotate the backing key, which is the key material stored in KMS and tied to the key ID of the customer-created customer master key (CMK). The backing key is used to perform cryptographic operations such as encryption and decryption. Automated key rotation currently retains all prior backing keys so that decryption of encrypted data can take place transparently. It is recommended that CMK key rotation be enabled for symmetric keys. Key rotation cannot be enabled for any asymmetric CMK.

Recommendation: Ensure rotation for customer created symmetric CMKs is enabled

To remediate this finding, complete the following steps:

AWS console

  1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam.
  2. In the left navigation pane, choose Customer managed keys .
  3. Select a customer managed CMK where Key spec = SYMMETRIC_DEFAULT
  4. Underneath the "General configuration" panel open the tab "Key rotation"
  5. Check the "Automatically rotate this KMS key every year." checkbox

AWS CLI

  1. Run the following command to enable key rotation:
 aws kms enable-key-rotation --key-id <kms_key_id>

Learn about this finding type's supported assets and scan settings.

Routing Tables Vpc Peering Are Least Access

Category name in the API: ROUTING_TABLES_VPC_PEERING_ARE_LEAST_ACCESS

Checks if route tables for VPC peering are configured with the principle of least privilege.

Recommendation: Ensure routing tables for VPC peering are "least access"

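Remediation steps are not listed for this finding type. In general, remediation means narrowing any peering route that targets a broad CIDR down to the specific subnets that actually need to communicate; the IDs and ranges below are placeholders.

# Find routes that point at a VPC peering connection.
aws ec2 describe-route-tables \
  --query 'RouteTables[].Routes[?VpcPeeringConnectionId!=`null`]'

# Remove an overly broad peering route...
aws ec2 delete-route \
  --route-table-id rtb-0abc1234 \
  --destination-cidr-block 10.1.0.0/16

# ...and add a narrower route scoped to the subnet that needs access.
aws ec2 create-route \
  --route-table-id rtb-0abc1234 \
  --destination-cidr-block 10.1.2.0/24 \
  --vpc-peering-connection-id pcx-0abc1234
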
Learn about this finding type's supported assets and scan settings.

S3 Account Level Public Access Blocks

Category name in the API: S3_ACCOUNT_LEVEL_PUBLIC_ACCESS_BLOCKS

Amazon S3 Block Public Access provides settings for access points, buckets, and accounts to help you manage public access to Amazon S3 resources. By default, new buckets, access points, and objects do not allow public access.

Recommendation: Checks if the required S3 public access block settings are configured from account level

To remediate this finding, complete the following steps:

Terraform

The following Terraform resource configures account level access to S3.

resource "aws_s3_account_public_access_block" "s3_control" {
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

AWS Console

To edit block public access settings for all the S3 buckets in an AWS account:

  1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. Choose Block Public Access settings for this account.
  3. Choose Edit to change the block public access settings for all the buckets in your AWS account.
  4. Choose the settings that you want to change, and then choose Save changes.
  5. When you're asked for confirmation, enter confirm. Then choose Confirm to save your changes.

AWS CLI

aws s3control put-public-access-block \
--account-id <value> \
--public-access-block-configuration '{"BlockPublicAcls": true, "BlockPublicPolicy": true, "IgnorePublicAcls": true, "RestrictPublicBuckets": true}'

Learn about this finding type's supported assets and scan settings.

S3 Bucket Logging Enabled

Category name in the API: S3_BUCKET_LOGGING_ENABLED

The AWS S3 Server Access Logging feature records access requests to storage buckets, which is useful for security audits. By default, server access logging is not enabled for S3 buckets.

Recommendation: Checks if logging is enabled on all S3 buckets

To remediate this finding, complete the following steps:

Terraform

The following example demonstrates how to create two buckets:

  1. A logging bucket
  2. A compliant bucket

variable "bucket_acl_map" {
  type = map(any)
  default = {
    "logging-bucket"   = "log-delivery-write"
    "compliant-bucket" = "private"
  }
}

resource "aws_s3_bucket" "all" {
  for_each            = var.bucket_acl_map
  bucket              = each.key
  object_lock_enabled = true
  tags = {
    "Pwd"    = "s3"
  }
}

resource "aws_s3_bucket_acl" "private" {
  for_each = var.bucket_acl_map
  bucket   = each.key
  acl      = each.value
}

resource "aws_s3_bucket_versioning" "enabled" {
  for_each = var.bucket_acl_map
  bucket   = each.key
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_logging" "enabled" {
  for_each      = var.bucket_acl_map
  bucket        = each.key
  target_bucket = aws_s3_bucket.all["logging-bucket"].id
  target_prefix = "log/"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
  for_each = var.bucket_acl_map
  bucket   = each.key

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
    }
  }
}

AWS Console

For information on how to enable S3 access logging via the AWS console, see Enabling Amazon S3 server access logging in the AWS documentation.

AWS CLI

The following example demonstrates how to:

  1. Create a bucket policy to grant the logging service principal permission to PutObject in your logging bucket.

policy.json:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3ServerAccessLogsPolicy",
      "Effect": "Allow",
      "Principal": {"Service": "logging.s3.amazonaws.com"},
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::MyBucket/Logs/*",
      "Condition": {
        "ArnLike": {"aws:SourceARN": "arn:aws:s3:::SOURCE-BUCKET-NAME"},
        "StringEquals": {"aws:SourceAccount": "SOURCE-AWS-ACCOUNT-ID"}
      }
    }
  ]
}

aws s3api put-bucket-policy \
  --bucket my-bucket \
  --policy file://policy.json
  2. Apply the logging configuration to your bucket

logging.json:

{
  "LoggingEnabled": {
    "TargetBucket": "MyBucket",
    "TargetPrefix": "Logs/"
  }
}

aws s3api put-bucket-logging \
  --bucket MyBucket \
  --bucket-logging-status file://logging.json

Learn about this finding type's supported assets and scan settings.

S3 Bucket Policy Set Deny Http Requests

Category name in the API: S3_BUCKET_POLICY_SET_DENY_HTTP_REQUESTS

At the Amazon S3 bucket level, you can configure permissions through a bucket policy so that objects are accessible only through HTTPS.

Recommendation: Ensure S3 Bucket Policy is set to deny HTTP requests

To remediate this finding, complete the following steps:

AWS console

  1. Log in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/
  2. Select the Check box next to the Bucket.
  3. Click on 'Permissions'.
  4. Click 'Bucket Policy'
  5. Add the following statement to the existing policy, filling in the required information:
{
 "Sid": "<optional>",
 "Effect": "Deny",
 "Principal": "*",
 "Action": "s3:*",
 "Resource": "arn:aws:s3:::<bucket_name>/*",
 "Condition": {
 "Bool": {
 "aws:SecureTransport": "false"
 }
 }
}
  6. Save
  7. Repeat for all the buckets in your AWS account that contain sensitive data.

AWS console (using the AWS Policy Generator)

  1. Repeat steps 1-4 above.
  2. Click on Policy Generator at the bottom of the Bucket Policy Editor
  3. Select Policy Type S3 Bucket Policy
  4. Add Statements
  5. Effect = Deny
  6. Principal = *
  7. AWS Service = Amazon S3
  8. Actions = *
  9. Amazon Resource Name =
  10. Generate Policy
  11. Copy the text and add it to the Bucket Policy.

AWS CLI

  1. Export the bucket policy to a JSON file:
aws s3api get-bucket-policy --bucket <bucket_name> --query Policy --output text > policy.json

  2. Modify the policy.json file by adding in this statement:

{
 "Sid": <optional>",
 "Effect": "Deny",
 "Principal": "*",
 "Action": "s3:*",
 "Resource": "arn:aws:s3:::<bucket_name>/*",
 "Condition": {
 "Bool": {
 "aws:SecureTransport": "false"
 }
 }
 }
  3. Apply this modified policy back to the S3 bucket:
aws s3api put-bucket-policy --bucket <bucket_name> --policy file://policy.json

Learn about this finding type's supported assets and scan settings.

S3 Bucket Replication Enabled

Category name in the API: S3_BUCKET_REPLICATION_ENABLED

This control checks whether an Amazon S3 bucket has Cross-Region Replication enabled. The control fails if the bucket doesn't have Cross-Region Replication enabled or if Same-Region Replication is also enabled.

Replication is the automatic, asynchronous copying of objects across buckets in the same or different AWS Regions. Replication copies newly created objects and object updates from a source bucket to a destination bucket or buckets. AWS best practices recommend replication for source and destination buckets that are owned by the same AWS account. In addition to availability, you should consider other systems hardening settings.

Recommendation: Checks whether S3 buckets have cross-region replication enabled

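Remediation steps are not listed for this finding type. Cross-Region Replication requires versioning on both the source and destination buckets and an IAM role that S3 can assume; the bucket names, role ARN, and rule below are placeholder examples, not a definitive configuration.

# Versioning must be enabled on both buckets before replication can be set up.
aws s3api put-bucket-versioning \
  --bucket SOURCE-BUCKET \
  --versioning-configuration Status=Enabled

# Apply a replication configuration that copies objects to a bucket in
# another Region (role ARN and bucket names are placeholders).
aws s3api put-bucket-replication \
  --bucket SOURCE-BUCKET \
  --replication-configuration '{
    "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
    "Rules": [{
      "ID": "cross-region-replication",
      "Status": "Enabled",
      "Priority": 1,
      "DeleteMarkerReplication": {"Status": "Disabled"},
      "Filter": {},
      "Destination": {"Bucket": "arn:aws:s3:::DESTINATION-BUCKET"}
    }]
  }'
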
Learn about this finding type's supported assets and scan settings.

S3 Bucket Server Side Encryption Enabled

Category name in the API: S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED

This checks that your S3 bucket either has Amazon S3 default encryption enabled or that the S3 bucket policy explicitly denies put-object requests without server-side encryption.

Recommendation: Ensure all S3 buckets employ encryption-at-rest

To remediate this finding, complete the following steps:

Terraform

resource "aws_s3_bucket_server_side_encryption_configuration" "enable" {
  bucket = "my-bucket"

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

AWS Console

To enable default encryption on an S3 bucket

  1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. In the left navigation pane, choose Buckets.
  3. Choose the S3 bucket from the list.
  4. Choose Properties.
  5. Choose Default encryption.
  6. For the encryption, choose either AES-256 or AWS-KMS.
  7. Choose AES-256 to use keys that are managed by Amazon S3 for default encryption. For more information about using Amazon S3 server-side encryption to encrypt your data, see the Amazon Simple Storage Service User Guide.
  8. Choose AWS-KMS to use keys that are managed by AWS KMS for default encryption. Then choose a master key from the list of the AWS KMS master keys that you have created.
  9. Type the Amazon Resource Name (ARN) of the AWS KMS key to use. You can find the ARN for your AWS KMS key in the IAM console, under Encryption keys. Or, you can choose a key name from the drop-down list.
  10. Important: if you use the AWS KMS option for your default encryption configuration, you are subject to the RPS (requests per second) quotas of AWS KMS. For more information about AWS KMS quotas and how to request a quota increase, see the AWS Key Management Service Developer Guide.
  11. Choose Save.

For more information about creating an AWS KMS key, see the AWS Key Management Service Developer Guide.

For more information about using AWS KMS with Amazon S3, see the Amazon Simple Storage Service User Guide.

When enabling default encryption, you might need to update your bucket policy. For more information about moving from bucket policies to default encryption, see the Amazon Simple Storage Service User Guide.

AWS CLI

aws s3api put-bucket-encryption \
  --bucket my-bucket \
  --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'

Learn about this finding type's supported assets and scan settings.

S3 Bucket Versioning Enabled

Category name in the API: S3_BUCKET_VERSIONING_ENABLED

Amazon S3 versioning is a means of keeping multiple variants of an object in the same bucket and can help you to recover more easily from both unintended user actions and application failures.

Recommendation: Checks that versioning is enabled for all S3 buckets

To remediate this finding, complete the following steps:

Terraform

resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-bucket"

  versioning {
    enabled = true
  }
}

AWS Console

To enable or disable versioning on an S3 bucket

  1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. In the Buckets list, choose the name of the bucket that you want to enable versioning for.
  3. Choose Properties.
  4. Under Bucket Versioning, choose Edit.
  5. Choose Suspend or Enable, and then choose Save changes.

AWS CLI

aws s3api put-bucket-versioning \
--bucket <bucket_name> \
--versioning-configuration Status=Enabled

Learn about this finding type's supported assets and scan settings.

S3 Default Encryption Kms

Category name in the API: S3_DEFAULT_ENCRYPTION_KMS

Checks whether the Amazon S3 buckets are encrypted with AWS Key Management Service (AWS KMS).

Recommendation: Checks that all buckets are encrypted with KMS

To remediate this finding, complete the following steps:

Terraform

resource "aws_kms_key" "s3_encryption" {
  description         = "Used for S3 Bucket encryption configuration"
  enable_key_rotation = true
}

resource "aws_s3_bucket_server_side_encryption_configuration" "enable" {
  bucket   = "my-bucket"

  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = aws_kms_key.s3_encryption.arn
      sse_algorithm     = "aws:kms"
    }
  }
}

AWS Console

To enable default encryption on an S3 bucket

  1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. In the left navigation pane, choose Buckets.
  3. Choose the S3 bucket from the list.
  4. Choose Properties.
  5. Choose Default encryption.
  6. For the encryption, choose AWS-KMS.
  7. Choose AWS-KMS to use keys that are managed by AWS KMS for default encryption. Then choose a master key from the list of the AWS KMS master keys that you have created. For more information on how to create KMS keys, see the AWS Documentation - Creating Keys
  8. Type the Amazon Resource Name (ARN) of the AWS KMS key to use. You can find the ARN for your AWS KMS key in the IAM console, under Encryption keys. Or, you can choose a key name from the drop-down list.
  9. Important: this solution is subject to the RPS (requests per second) quotas of AWS KMS. For more information about AWS KMS quotas and how to request a quota increase, see the AWS Key Management Service Developer Guide.
  10. Choose Save.

For more information about using AWS KMS with Amazon S3, see the Amazon Simple Storage Service User Guide.

When enabling default encryption, you might need to update your bucket policy. For more information about moving from bucket policies to default encryption, see the Amazon Simple Storage Service User Guide.

AWS CLI

Create a KMS key

aws kms create-key \
  --description "Key to encrypt S3 buckets"

Enable key rotation

aws kms enable-key-rotation \
  --key-id <key_id_from_previous_command>

Update bucket

aws s3api put-bucket-encryption \
  --bucket my-bucket \
  --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"KMSMasterKeyID": "<id_from_key>", "SSEAlgorithm": "AES256"}}]}'

Learn about this finding type's supported assets and scan settings.

Sagemaker Notebook Instance Kms Key Configured

Category name in the API: SAGEMAKER_NOTEBOOK_INSTANCE_KMS_KEY_CONFIGURED

Checks if an AWS Key Management Service (AWS KMS) key is configured for an Amazon SageMaker notebook instance. The rule is NON_COMPLIANT if 'KmsKeyId' is not specified for the SageMaker notebook instance.

Recommendation: Checks that all SageMaker notebook instances are configured to use KMS

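Remediation steps are not listed for this finding type. The KMS key of a notebook instance generally cannot be changed after creation, so remediation usually means recreating the instance with a key attached; the instance name, role ARN, and key ID below are placeholders.

# Create a replacement notebook instance with a KMS key for volume encryption.
aws sagemaker create-notebook-instance \
  --notebook-instance-name "my-encrypted-notebook" \
  --instance-type "ml.t3.medium" \
  --role-arn "arn:aws:iam::123456789012:role/SageMakerExecutionRole" \
  --kms-key-id <kms_key_id>
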
Learn about this finding type's supported assets and scan settings.

Sagemaker Notebook No Direct Internet Access

Category name in the API: SAGEMAKER_NOTEBOOK_NO_DIRECT_INTERNET_ACCESS

Checks whether direct internet access is disabled for a SageMaker notebook instance. To do this, it checks whether the DirectInternetAccess field is disabled for the notebook instance.

If you configure your SageMaker instance without a VPC, then by default direct internet access is enabled on your instance. You should configure your instance with a VPC and change the default setting to Disable—Access the internet through a VPC.

To train or host models from a notebook, you need internet access. To enable internet access, make sure that your VPC has a NAT gateway and your security group allows outbound connections. To learn more about how to connect a notebook instance to resources in a VPC, see Connect a notebook instance to resources in a VPC in the Amazon SageMaker Developer Guide.

You should also ensure that access to your SageMaker configuration is limited to only authorized users. Restrict users' IAM permissions to modify SageMaker settings and resources.

Recommendation: Checks whether direct internet access is disabled for all Amazon SageMaker notebook instances

To remediate this finding, complete the following steps:

AWS Console

Note that you cannot change the internet access setting after a notebook instance is created. It must be stopped, deleted, and recreated.

To configure a SageMaker notebook instance to deny direct internet access:

  1. Open the SageMaker console at https://console.aws.amazon.com/sagemaker/
  2. Navigate to Notebook instances.
  3. Delete the instance that has direct internet access enabled. Choose the instance, choose Actions, then choose Stop.
  4. After the instance is stopped, choose Actions, then choose Delete.
  5. Choose Create notebook instance. Provide the configuration details.
  6. Expand the network section, then choose a VPC, subnet, and security group. Under Direct internet access, choose Disable—Access the internet through a VPC.
  7. Choose Create notebook instance.

For more information, see Connect a notebook instance to resources in a VPC in the Amazon SageMaker Developer Guide.

Learn about this finding type's supported assets and scan settings.

Secretsmanager Rotation Enabled Check

Category name in the API: SECRETSMANAGER_ROTATION_ENABLED_CHECK

Checks whether a secret stored in AWS Secrets Manager is configured with automatic rotation. The control fails if the secret isn't configured with automatic rotation. If you provide a custom value for the maximumAllowedRotationFrequency parameter, the control passes only if the secret is automatically rotated within the specified window of time.

Secrets Manager helps you improve the security posture of your organization. Secrets include database credentials, passwords, and third-party API keys. You can use Secrets Manager to store secrets centrally, encrypt secrets automatically, control access to secrets, and rotate secrets safely and automatically.

Secrets Manager can rotate secrets. You can use rotation to replace long-term secrets with short-term ones. Rotating your secrets limits how long an unauthorized user can use a compromised secret. For this reason, you should rotate your secrets frequently. To learn more about rotation, see Rotating your AWS Secrets Manager secrets in the AWS Secrets Manager User Guide.

Recommendation: Checks that all AWS Secrets Manager secrets have rotation enabled

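Remediation steps are not listed for this finding type. Rotation is configured per secret and requires a rotation Lambda function; the secret name, Lambda ARN, and schedule below are placeholders.

# Turn on automatic rotation for a secret, using an existing rotation Lambda.
aws secretsmanager rotate-secret \
  --secret-id "my-database-secret" \
  --rotation-lambda-arn "arn:aws:lambda:us-east-1:123456789012:function:my-rotation-function" \
  --rotation-rules AutomaticallyAfterDays=30
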
Learn about this finding type's supported assets and scan settings.

Sns Encrypted Kms

Category name in the API: SNS_ENCRYPTED_KMS

Checks whether an SNS topic is encrypted at rest using AWS KMS. The control fails if an SNS topic doesn't use a KMS key for server-side encryption (SSE).

Encrypting data at rest reduces the risk of data stored on disk being accessed by a user not authenticated to AWS. It also adds another set of access controls to limit the ability of unauthorized users to access the data. For example, API permissions are required to decrypt the data before it can be read. SNS topics should be encrypted at-rest for an added layer of security.

Recommendation: Checks that all SNS topics are encrypted with KMS

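Remediation steps are not listed for this finding type. Server-side encryption is enabled on a topic by setting its KmsMasterKeyId attribute; the topic ARN below is a placeholder, and the AWS managed alias/aws/sns key can be replaced with a customer managed key.

# Enable server-side encryption on an SNS topic using the AWS managed SNS key.
aws sns set-topic-attributes \
  --topic-arn "arn:aws:sns:us-east-1:123456789012:my-topic" \
  --attribute-name KmsMasterKeyId \
  --attribute-value "alias/aws/sns"
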
Learn about this finding type's supported assets and scan settings.

Vpc Default Security Group Closed

Category name in the API: VPC_DEFAULT_SECURITY_GROUP_CLOSED

This control checks whether the default security group of a VPC allows inbound or outbound traffic. The control fails if the security group allows inbound or outbound traffic.

The rules for the default security group allow all outbound and inbound traffic from network interfaces (and their associated instances) that are assigned to the same security group. We recommend that you don't use the default security group. Because the default security group cannot be deleted, you should change the default security group rules setting to restrict inbound and outbound traffic. This prevents unintended traffic if the default security group is accidentally configured for resources such as EC2 instances.

Recommendation: Ensure the default security group of every VPC restricts all traffic

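Remediation steps are not listed for this finding type. Because the default security group cannot be deleted, remediation consists of removing its default inbound and outbound rules; the group ID below is a placeholder.

# Remove the default inbound rule (all traffic from the group itself).
aws ec2 revoke-security-group-ingress \
  --group-id sg-0abc1234 \
  --ip-permissions '[{"IpProtocol":"-1","UserIdGroupPairs":[{"GroupId":"sg-0abc1234"}]}]'

# Remove the default outbound rule (all traffic to 0.0.0.0/0).
aws ec2 revoke-security-group-egress \
  --group-id sg-0abc1234 \
  --ip-permissions '[{"IpProtocol":"-1","IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'
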
Learn about this finding type's supported assets and scan settings.

Vpc Flow Logging Enabled All Vpcs

Category name in the API: VPC_FLOW_LOGGING_ENABLED_ALL_VPCS

VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs. It is recommended that VPC Flow Logs be enabled for packet "Rejects" for VPCs.

Recommendation: Ensure VPC flow logging is enabled in all VPCs

To remediate this finding, complete the following steps:

AWS console

  1. Sign in to the AWS Management Console
  2. Select Services then VPC
  3. In the left navigation pane, select Your VPCs
  4. Select a VPC
  5. In the right pane, select the Flow Logs tab.
  6. If no Flow Log exists, click Create Flow Log
  7. For Filter, select Reject
  8. Enter in a Role and Destination Log Group
  9. Click Create Flow Log
  10. Click on CloudWatch Logs Group

Note: Setting the filter to "Reject" will dramatically reduce the logging data accumulation for this recommendation and provide sufficient information for the purposes of breach detection, research, and remediation. However, during periods of least-privilege security group engineering, setting the filter to "All" can be very helpful in discovering existing traffic flows required for proper operation of an already running environment.

AWS CLI

  1. Create a policy document, name it role_policy_document.json, and paste the following content:
{
 "Version": "2012-10-17",
 "Statement": [
 {
 "Sid": "test",
 "Effect": "Allow",
 "Principal": {
 "Service": "ec2.amazonaws.com"
 },
 "Action": "sts:AssumeRole"
 }
 ]
}
  2. Create another policy document, name it iam_policy.json, and paste the following content:
{
 "Version": "2012-10-17",
 "Statement": [
 {
 "Effect": "Allow",
 "Action":[
 "logs:CreateLogGroup",
 "logs:CreateLogStream",
 "logs:DescribeLogGroups",
 "logs:DescribeLogStreams",
 "logs:PutLogEvents",
 "logs:GetLogEvents",
 "logs:FilterLogEvents"
 ],
 "Resource": "*"
 }
 ]
}
  3. Run the below command to create an IAM role:
aws iam create-role --role-name <aws_support_iam_role> --assume-role-policy-document file://<file-path>role_policy_document.json
  4. Run the below command to create an IAM policy:
aws iam create-policy --policy-name <iam-policy-name> --policy-document file://<file-path>iam_policy.json
  5. Run the attach-role-policy command using the IAM policy ARN returned at the previous step to attach the policy to the IAM role (if the command succeeds, no output is returned):
aws iam attach-role-policy --policy-arn arn:aws:iam::<aws-account-id>:policy/<iam-policy-name> --role-name <aws_support_iam_role>
  6. Run describe-vpcs to get the VpcId available in the selected region:
aws ec2 describe-vpcs --region <region>
  7. The command output should return the VPC ID available in the selected region.
  8. Run create-flow-logs to create a flow log for the VPC:
aws ec2 create-flow-logs --resource-type VPC --resource-ids <vpc-id> --traffic-type REJECT --log-group-name <log-group-name> --deliver-logs-permission-arn <iam-role-arn>
  9. Repeat step 8 for the other VPCs available in the selected region.
  10. Change the region by updating --region and repeat the remediation procedure for the other VPCs.

Learn about this finding type's supported assets and scan settings.

Vpc Sg Open Only To Authorized Ports

Category name in the API: VPC_SG_OPEN_ONLY_TO_AUTHORIZED_PORTS

This control checks whether an Amazon EC2 security group permits unrestricted incoming traffic from unauthorized ports. The control status is determined as follows:

If you use the default value for authorizedTcpPorts, the control fails if the security group permits unrestricted incoming traffic from any port other than ports 80 and 443.

If you provide custom values for authorizedTcpPorts or authorizedUdpPorts, the control fails if the security group permits unrestricted incoming traffic from any unlisted port.

If no parameter is used, the control fails for any security group that has an unrestricted inbound traffic rule.

Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. Security group rules should follow the principle of least-privilege access. Unrestricted access (an IP address with a /0 suffix) increases the opportunity for malicious activity such as hacking, denial-of-service attacks, and loss of data. Unless a port is specifically allowed, the port should deny unrestricted access.

Recommendation: Checks that any security group with 0.0.0.0/0 of any VPC allows only specific inbound TCP/UDP traffic

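Remediation steps are not listed for this finding type. Remediation means revoking any world-open rule on a port that is not in your authorized list; the group ID and port below are placeholders.

# Remove an inbound rule that exposes an unauthorized port to 0.0.0.0/0.
aws ec2 revoke-security-group-ingress \
  --group-id sg-0abc1234 \
  --protocol tcp \
  --port 8080 \
  --cidr 0.0.0.0/0
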
Learn about this finding type's supported assets and scan settings.

Both VPC VPN Tunnels Up

Category name in the API: VPC_VPN_2_TUNNELS_UP

A VPN tunnel is an encrypted link where data can pass from the customer network to or from AWS within an AWS Site-to-Site VPN connection. Each VPN connection includes two VPN tunnels which you can simultaneously use for high availability. Ensuring that both VPN tunnels are up for a VPN connection is important for confirming a secure and highly available connection between an AWS VPC and your remote network.

This control checks that both VPN tunnels provided by AWS Site-to-Site VPN are in UP status. The control fails if one or both tunnels are in DOWN status.

Recommendation: Checks that both AWS VPN tunnels provided by AWS site-to-site are in UP status

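Remediation steps are not listed for this finding type. You can inspect the tunnel status from the AWS CLI as shown below (the connection ID is a placeholder); if a tunnel is DOWN, investigate the tunnel configuration and the customer gateway device on both sides of the connection.

# Show the status of both tunnels for a Site-to-Site VPN connection.
aws ec2 describe-vpn-connections \
  --vpn-connection-ids vpn-0abc1234 \
  --query 'VpnConnections[].VgwTelemetry[].[OutsideIpAddress,Status,StatusMessage]'
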
Learn about this finding type's supported assets and scan settings.