This page provides a list of reference guides and techniques for remediating Security Health Analytics findings using Security Command Center.
You need adequate Identity and Access Management (IAM) roles to view or edit findings, and to access or modify Google Cloud resources. If you encounter permissions errors when accessing Security Command Center in the Google Cloud console, ask your administrator for assistance and, to learn about roles, see Access control. To resolve resource errors, read documentation for affected products.
Security Health Analytics remediation
This section includes remediation instructions for all Security Health Analytics findings.
For finding types that are mapped to CIS Benchmarks, the remediation guidance comes from Center for Internet Security (CIS), unless otherwise stated. For more information, see Detectors and compliance.
Deactivation of findings after remediation
After you remediate a vulnerability or misconfiguration finding,
Security Health Analytics automatically sets the state of the finding to INACTIVE
the next time it scans for the finding.
How long Security Health Analytics takes to set a remediated finding to INACTIVE
depends on when the finding is fixed and the schedule of the scan that
detects the finding.
Security Health Analytics also sets the state of a finding to INACTIVE
when a scan detects that the resource that is affected by the finding
is deleted. If you want to remove a finding for a deleted resource
from your display while you are waiting for Security Health Analytics to detect that
the resource is deleted, you can mute the finding. To mute a finding, see
Mute findings in Security Command Center.
Do not use mute to hide remediated findings for existing resources.
If the issue recurs and Security Health Analytics restores the ACTIVE state of the finding, you might not see the reactivated finding, because muted findings are excluded from any finding query that specifies NOT mute="MUTED", such as the default finding query.
For information about scan intervals, see Security Health Analytics scan types.
Access Transparency disabled
Category name in the API: ACCESS_TRANSPARENCY_DISABLED
Access Transparency logs when Google Cloud employees access the projects in your organization to provide support. Enable Access Transparency to log who from Google Cloud is accessing your information, when, and why. For more information, see Access Transparency.
To enable Access Transparency on a project, the project must be associated with a billing account.
Required roles
To get the permissions that you need to perform this task, ask your administrator to grant you the Access Transparency Admin (roles/axt.admin) IAM role at the organization level. For more information about granting roles, see Manage access.
This predefined role contains the permissions axt.labels.get and axt.labels.set, which are required to perform this task. You might also be able to get these permissions with a custom role or other predefined roles.
Remediation steps
To remediate this finding, complete the following steps:
Check your organization-level permissions:
Go to the Identity and Access Management page in the Google Cloud console.
If you're prompted, select the Google Cloud organization in the selector menu.
Select any Google Cloud project within the organization using the selector menu.
Although Access Transparency is configured from a Google Cloud project page, it is enabled for the entire organization.
Go to the IAM & Admin > Settings page.
Click Enable Access Transparency.
Learn about this finding type's supported assets and scan settings.
AlloyDB auto backup disabled
Category name in the API: ALLOYDB_AUTO_BACKUP_DISABLED
An AlloyDB for PostgreSQL cluster doesn't have automatic backups enabled.
To help prevent data loss, turn on automated backups for your cluster. For more information, see Configure additional automated backups.
To remediate this finding, complete the following steps:
Go to the AlloyDB for PostgreSQL clusters page in the Google Cloud console.
Click a cluster in the Resource Name column.
Click Data protection.
Under the Automated backup policy section, click Edit in the Automated backups row.
Select the Automate backups checkbox.
Click Update.
Learn about this finding type's supported assets and scan settings.
AlloyDB backups disabled
Category name in the API: ALLOYDB_BACKUPS_DISABLED
An AlloyDB for PostgreSQL cluster has neither automatic nor continuous backups enabled.
To help prevent data loss, turn on either automated or continuous backups for your cluster. For more information, see Configure additional backups.
To remediate this finding, complete the following steps:
Go to the AlloyDB for PostgreSQL clusters page in the Google Cloud console.
In the Resource Name column, click the name of the cluster that is identified in the finding.
Click Data protection.
Set up a backup policy.
Learn about this finding type's supported assets and scan settings.
AlloyDB CMEK disabled
Category name in the API: ALLOYDB_CMEK_DISABLED
An AlloyDB cluster is not using customer-managed encryption keys (CMEK).
With CMEK, keys that you create and manage in Cloud KMS wrap the keys that Google uses to encrypt your data, giving you more control over access to your data. For more information, see About CMEK. CMEK incurs additional costs related to Cloud KMS.
To remediate this finding, complete the following steps:
Go to the AlloyDB for PostgreSQL clusters page in the Google Cloud console.
In the Resource Name column, click the name of the cluster that is identified in the finding.
Click Create Backup. Set a backup ID.
Click Create.
Under the Backup/Restore section, click Restore next to the Backup ID entry you chose.
Set a new cluster ID and network.
Click Advanced Encryption Options. Select the CMEK that you want to encrypt the new cluster with.
Click Restore.
Learn about this finding type's supported assets and scan settings.
AlloyDB log min error statement severity
Category name in the API: ALLOYDB_LOG_MIN_ERROR_STATEMENT_SEVERITY
An AlloyDB for PostgreSQL instance does not have the log_min_error_statement database flag set to error or another recommended value.
The log_min_error_statement flag controls whether SQL statements that cause error conditions are recorded in server logs. SQL statements of the specified severity or higher are logged. The higher the severity, the fewer messages are recorded. If set to a severity level that is too high, error messages might not be logged.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the AlloyDB for PostgreSQL clusters page in the Google Cloud console.
Click the cluster in the Resource Name column.
Under the Instances in your cluster section, click Edit for the instance.
Click Advanced Configuration Options.
Under the Flags section, set the log_min_error_statement database flag to one of the following recommended values, according to your organization's logging policy:
- debug5
- debug4
- debug3
- debug2
- debug1
- info
- notice
- warning
- error
Click Update Instance.
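If you manage instances from the command line, the following is a minimal sketch of the same change, assuming your gcloud release supports the AlloyDB --database-flags option and that my-instance, my-cluster, and us-central1 are placeholders for your own resources (the flag list you pass replaces any flags already set, so include existing flags as well):
# Set the log_min_error_statement flag on an AlloyDB instance (placeholder names).
gcloud alloydb instances update my-instance --cluster=my-cluster --region=us-central1 --database-flags=log_min_error_statement=error
The same pattern applies to the log_min_messages and log_error_verbosity flags covered in the following sections.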
Learn about this finding type's supported assets and scan settings.
AlloyDB log min messages
Category name in the API: ALLOYDB_LOG_MIN_MESSAGES
An AlloyDB for PostgreSQL instance does not have the log_min_messages database flag set to at minimum warning.
The log_min_messages flag controls which message levels are recorded in server logs. The higher the severity, the fewer messages are recorded. Setting the threshold too low can result in increased log storage size and length, making it difficult to find actual errors.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the AlloyDB for PostgreSQL clusters page in the Google Cloud console.
Click the cluster in the Resource Name column.
Under the Instances in your cluster section, click Edit for the instance.
Click Advanced Configuration Options.
Under the Flags section, set the log_min_messages database flag to one of the following recommended values, according to your organization's logging policy:
- debug5
- debug4
- debug3
- debug2
- debug1
- info
- notice
- warning
Click Update Instance.
Learn about this finding type's supported assets and scan settings.
AlloyDB log error verbosity
Category name in the API: ALLOYDB_LOG_ERROR_VERBOSITY
An AlloyDB for PostgreSQL instance does not have the log_error_verbosity database flag set to default or another less restrictive value.
The log_error_verbosity flag controls the amount of detail in messages logged. The greater the verbosity, the more details are recorded in messages. We recommend setting this flag to default or another less restrictive value.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the AlloyDB for PostgreSQL clusters page in the Google Cloud console.
Click the cluster in the Resource Name column.
Under the Instances in your cluster section, click Edit for the instance.
Click Advanced Configuration Options.
Under the Flags section, set the log_error_verbosity database flag to one of the following recommended values, according to your organization's logging policy:
- default
- verbose
Click Update Instance.
Learn about this finding type's supported assets and scan settings.
AlloyDB Public IP
Category name in the API: ALLOYDB_PUBLIC_IP
An AlloyDB for PostgreSQL database instance has a public IP address.
To reduce your organization's attack surface, use private instead of public IP addresses. Private IP addresses provide improved network security and lower latency for your application.
To remediate this finding, complete the following steps:
Go to the AlloyDB for PostgreSQL clusters page in the Google Cloud console.
In the Resource Name column, click the name of the cluster that is identified in the finding.
Under the Instances in your cluster section, click Edit for the instance.
Under the Connectivity section, uncheck the box for Enable Public IP.
Click Update Instance.
Learn about this finding type's supported assets and scan settings.
AlloyDB SSL not enforced
Category name in the API: ALLOYDB_SSL_NOT_ENFORCED
An AlloyDB for PostgreSQL database instance doesn't require all incoming connections to use SSL.
To avoid leaking sensitive data in transit through unencrypted communications, all incoming connections to your AlloyDB database instance should use SSL. Learn more about Configuring SSL/TLS.
To remediate this finding, complete the following steps:
Go to the AlloyDB for PostgreSQL clusters page in the Google Cloud console.
In the Resource Name column, click the name of the cluster that is identified in the finding.
Under the Instances in your cluster section, click Edit for the instance.
Under the Network Security section, select the checkbox for Require SSL Encryption.
Click Update Instance.
Learn about this finding type's supported assets and scan settings.
Admin service account
Category name in the API: ADMIN_SERVICE_ACCOUNT
A service account in your organization or project has Admin, Owner, or Editor privileges assigned to it. These roles have broad permissions and shouldn't be assigned to service accounts. To learn about service accounts and the roles available to them, see Service accounts.
To remediate this finding, complete the following steps:
Go to the IAM policy page in the Google Cloud console.
For each principal identified in the finding:
- Click Edit principal next to the principal.
- To remove permissions, click Delete role next to the role.
- Click Save.
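If you prefer the gcloud CLI, the following is a sketch of removing one overly broad grant, assuming your-project-id and example-sa@your-project-id.iam.gserviceaccount.com are placeholders for the project and service account named in the finding:
# Remove the Editor role from the service account; repeat for roles/owner or any admin roles it holds.
gcloud projects remove-iam-policy-binding your-project-id --member="serviceAccount:example-sa@your-project-id.iam.gserviceaccount.com" --role="roles/editor"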
Learn about this finding type's supported assets and scan settings.
Alpha cluster enabled
Category name in the API: ALPHA_CLUSTER_ENABLED
Alpha cluster features are enabled for a Google Kubernetes Engine (GKE) cluster.
Alpha clusters let early adopters experiment with workloads that use new features before they're released to the general public. Alpha clusters have all GKE API features enabled, but aren't covered by the GKE SLA, don't receive security updates, have node auto-upgrade and node auto-repair disabled, and can't be upgraded. They're also automatically deleted after 30 days.
To remediate this finding, complete the following steps:
Alpha clusters can't be disabled. You must create a new cluster with alpha features disabled.
Go to the Kubernetes clusters page in the Google Cloud console.
Click Create.
Select Configure next to the type of cluster you want to create.
Under the Features tab, ensure Enable Kubernetes alpha features in this cluster is disabled.
Click Create.
To move workloads to the new cluster, see Migrating workloads to different machine types.
To delete the original cluster, see Deleting a cluster.
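From the command line, a sketch of creating the replacement cluster, assuming new-cluster and us-central1-a are placeholders; alpha features are enabled only when you explicitly pass --enable-kubernetes-alpha, so omitting that flag creates a standard cluster:
# Create a standard (non-alpha) cluster to receive the migrated workloads.
gcloud container clusters create new-cluster --zone=us-central1-a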
Learn about this finding type's supported assets and scan settings.
API key APIs unrestricted
Category name in the API: API_KEY_APIS_UNRESTRICTED
There are API keys being used too broadly.
Unrestricted API keys are insecure because they can be retrieved from devices on which the key is stored or can be seen publicly, for instance, from within a browser. In accordance with the principle of least privilege, configure API keys to only call APIs required by the application. For more information, see Apply API key restrictions.
To remediate this finding, complete the following steps:
Go to the API keys page in the Google Cloud console.
For each API key:
- In the API keys section, on the row for each API key for which you need to restrict APIs, click Actions.
- From the Actions menu, click Edit API key. The Edit API key page opens.
- In the API restrictions section, select Restrict APIs. The Select APIs drop-down menu appears.
- On the Select APIs drop-down list, select which APIs to allow.
- Click Save. It might take up to five minutes for settings to take effect.
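If you manage keys with the gcloud CLI, the following sketch restricts a key to specific API targets, assuming KEY_ID is the key's ID from gcloud services api-keys list and that the listed service is only an example; check the api-keys reference for the exact syntax in your gcloud release:
# Find the key's ID, then restrict it to only the APIs the application needs.
gcloud services api-keys list
gcloud services api-keys update KEY_ID --api-target=service=translate.googleapis.com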
Learn about this finding type's supported assets and scan settings.
API key apps unrestricted
Category name in the API: API_KEY_APPS_UNRESTRICTED
There are API keys being used in an unrestricted way, allowing use by any untrusted app.
Unrestricted API keys are insecure because they can be retrieved on devices on which the key is stored or can be seen publicly, for instance, from within a browser. In accordance with the principle of least privilege, restrict API key usage to trusted hosts, HTTP referrers, and apps. For more information, see Apply API key restrictions.
To remediate this finding, complete the following steps:
Go to the API keys page in the Google Cloud console.
For each API key:
- In the API keys section, on the row for each API key for which you need to restrict applications, click Actions.
- From the Actions menu, click Edit API key. The Edit API key page opens.
- On the Edit API key page, under Application restrictions, select a restriction category. You can set one application restriction per key.
- In the Add an item field that appears when you select a restriction, click Add an item to add restrictions based on the needs of your application.
- Once finished adding items, click Done.
- Click Save.
Learn about this finding type's supported assets and scan settings.
API key exists
Category name in the API: API_KEY_EXISTS
A project is using API keys instead of standard authentication.
API keys are less secure than other authentication methods because they are simple encrypted strings and easy for others to discover and use. They can be retrieved on devices on which the key is stored or can be seen publicly, for instance, from within a browser. Also, API keys do not uniquely identify users or applications making requests. As an alternative, you can use a standard authentication flow, with either service accounts or user accounts.
To remediate this finding, complete the following steps:
- Ensure your applications are configured with an alternate form of authentication.
Go to the API credentials page in the Google Cloud console.
In the API keys section, on the row for each API key that you need to delete, click Actions.
From the Actions menu, click Delete API key.
Learn about this finding type's supported assets and scan settings.
API key not rotated
Category name in the API: API_KEY_NOT_ROTATED
An API key hasn't been rotated for more than 90 days.
API keys do not expire, so if one is stolen, it might be used indefinitely unless the project owner revokes or rotates the key. Regenerating API keys frequently reduces the amount of time that a stolen API key can be used to access data on a compromised or terminated account. Rotate API keys at least every 90 days. For more information, see Best practices for managing API keys.
To remediate this finding, complete the following steps:
Go to the API keys page in the Google Cloud console.
For each API key:
- In the API keys section, on the row for each API key that you need to rotate, click Actions.
- From the Actions menu, click Edit API key. The Edit API key page opens.
- On the Edit API key page, if the date in the Creation date field is older than 90 days, click Regenerate key. A new key is generated.
- Click Save.
- To ensure your applications continue working uninterrupted, update them to use the new API key. The old API key works for 24 hours before it is permanently deactivated.
Learn about this finding type's supported assets and scan settings.
Audit config not monitored
Category name in the API: AUDIT_CONFIG_NOT_MONITORED
Log metrics and alerts aren't configured to monitor audit configuration changes.
Cloud Logging produces Admin Activity and Data Access logs that enable security analysis, resource change tracking, and compliance auditing. By monitoring audit configuration changes, you ensure that all activities in your project can be audited at any time. For more information, see Overview of logs-based metrics.
Depending on the quantity of information, Cloud Monitoring costs can be significant. To understand your usage of the service and its costs, see Optimize cost: Cloud operations.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, create metrics, if necessary, and alert policies:
Create metric
Go to the Logs-based Metrics page in the Google Cloud console.
Click Create Metric.
Under Metric Type, select Counter.
Under Details:
- Set a Log metric name.
- Add a description.
- Set Units to 1.
Under Filter selection, copy and paste the following text into the Build filter box, replacing existing text, if necessary:
protoPayload.methodName="SetIamPolicy" AND protoPayload.serviceData.policyDelta.auditConfigDeltas:*
Click Create Metric. You see a confirmation.
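Alternatively, you can create the same counter metric from the command line; a sketch, assuming audit-config-changes is only an example metric name:
# Create a counter metric that matches audit configuration changes.
gcloud logging metrics create audit-config-changes --description="Audit configuration changes" --log-filter='protoPayload.methodName="SetIamPolicy" AND protoPayload.serviceData.policyDelta.auditConfigDeltas:*'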
Create Alert Policy
- In the Google Cloud console, go to the Log-based Metrics page. If you use the search bar to find this page, then select the result whose subheading is Logging.
- Under the User-defined metrics section, select the metric you created in the previous section.
- Click More, and then click Create alert from metric. The New condition dialog opens with the metric and data transformation options pre-populated.
- Click Next.
- Review the pre-populated settings. You might want to modify the Threshold value.
- Click Condition name and enter a name for the condition.
- Click Next.
- To add notifications to your alerting policy, click Notification channels. In the dialog, select one or more notification channels from the menu, and then click OK.
- To be notified when incidents are opened and closed, check Notify on incident closure. By default, notifications are sent only when incidents are opened.
- Optional: Update the Incident autoclose duration. This field determines when Monitoring closes incidents in the absence of metric data.
- Optional: Click Documentation, and then add any information that you want included in a notification message.
- Click Alert name and enter a name for the alerting policy.
- Click Create Policy.
Learn about this finding type's supported assets and scan settings.
Audit logging disabled
Category name in the API: AUDIT_LOGGING_DISABLED
This finding isn't available for project-level activations.
Audit logging is disabled for one or more Google Cloud services, or one or more principals are exempt from data access audit logging.
Enable Cloud Logging for all services to track all admin activities, read access, and write access to user data. Depending on the quantity of information, Cloud Logging costs can be significant. To understand your usage of the service and its cost, see Optimize cost: Cloud operations.
If any principals are exempted from data access audit logging on either the default data access audit logging configuration or the logging configurations for any individual services, remove the exemption.
To remediate this finding, complete the following steps:
Go to the Data Access audit logs default configuration page in the Google Cloud console.
On the Log types tab, activate data access audit logging in the default configuration:
- Select Admin Read, Data Read, and Data Write.
- Click Save.
On the Exempted principals tab, remove all exempted users from the default configuration:
- Remove each listed principal by clicking Delete next to each name.
- Click Save.
Go to the Audit Logs page.
Remove any exempted principals from the data access audit log configurations of individual services.
- Under Data access audit logs configuration, for each service that shows an exempted principal, click on the service. An audit log configuration panel opens for the service.
- On the Exempted principals tab, remove all exempted principals by clicking Delete next to each name.
- Click Save.
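You can also enable Data Access audit logs for all services by editing the project's IAM policy from the command line; a sketch, assuming your-project-id is a placeholder and policy.yaml is the file you download and edit:
# Download the current IAM policy, add the auditConfigs block below, then upload it again.
gcloud projects get-iam-policy your-project-id > policy.yaml
# Add to policy.yaml:
#   auditConfigs:
#   - service: allServices
#     auditLogConfigs:
#     - logType: ADMIN_READ
#     - logType: DATA_READ
#     - logType: DATA_WRITE
gcloud projects set-iam-policy your-project-id policy.yaml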
Learn about this finding type's supported assets and scan settings.
Auto backup disabled
Category name in the API: AUTO_BACKUP_DISABLED
A Cloud SQL database doesn't have automatic backups enabled.
To prevent data loss, turn on automated backups for your SQL instances. For more information, see Creating and managing on-demand and automatic backups.
To remediate this finding, complete the following steps:
In the Google Cloud console, go to the Cloud SQL Instances page.
Click the instance name.
Click Backups.
Next to Settings, click Edit.
Select the Automated daily backups checkbox.
Optional: In the Number of days box, enter how many days of backups you want to retain.
Optional: In the Backup window list, select the time window in which to take backups.
Click Save.
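If you prefer the command line, a minimal sketch of the same change with the gcloud CLI, assuming my-instance is a placeholder and 23:00 UTC is only an example window start:
# Enable automated daily backups starting at 23:00 UTC.
gcloud sql instances patch my-instance --backup-start-time=23:00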
Learn about this finding type's supported assets and scan settings.
Auto repair disabled
Category name in the API: AUTO_REPAIR_DISABLED
A Google Kubernetes Engine (GKE) cluster's auto repair feature, which keeps nodes in a healthy, running state, is disabled.
When enabled, GKE makes periodic checks on the health state of each node in your cluster. If a node fails consecutive health checks over an extended time period, GKE initiates a repair process for that node. For more information, see Auto-repairing nodes.
To remediate this finding, complete the following steps:
Go to the Kubernetes clusters page in the Google Cloud console.
Click the Nodes tab.
For each node pool:
- Click the name of the node pool to go to its detail page.
- Click Edit.
- Under Management, select Enable auto-repair.
- Click Save.
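A command-line sketch for a single node pool, assuming my-pool, my-cluster, and us-central1-a are placeholders:
# Enable auto-repair on the node pool.
gcloud container node-pools update my-pool --cluster=my-cluster --zone=us-central1-a --enable-autorepair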
Learn about this finding type's supported assets and scan settings.
Auto upgrade disabled
Category name in the API: AUTO_UPGRADE_DISABLED
A GKE cluster's auto upgrade feature, which keeps clusters and node pools on the latest stable version of Kubernetes, is disabled.
For more information, see Auto-upgrading nodes.
To remediate this finding, complete the following steps:
Go to the Kubernetes clusters page in the Google Cloud console.
In the list of clusters, click the name of the cluster.
Click the Nodes tab.
For each node pool:
- Click the name of the node pool to go to its detail page.
- Click Edit.
- Under Management, select Enable auto-upgrade.
- Click Save.
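A command-line sketch for a single node pool, assuming my-pool, my-cluster, and us-central1-a are placeholders:
# Enable auto-upgrade on the node pool.
gcloud container node-pools update my-pool --cluster=my-cluster --zone=us-central1-a --enable-autoupgrade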
Learn about this finding type's supported assets and scan settings.
BigQuery table CMEK disabled
Category name in the API: BIGQUERY_TABLE_CMEK_DISABLED
A BigQuery table is not configured to use a customer-managed encryption key (CMEK).
With CMEK, keys that you create and manage in Cloud KMS wrap the keys that Google Cloud uses to encrypt your data, giving you more control over access to your data. For more information, see Protecting data with Cloud KMS keys.
To remediate this finding, complete the following steps:
- Create a table protected by Cloud Key Management Service.
- Copy your table to the new CMEK-enabled table.
- Delete the original table.
To set a default CMEK key that encrypts all new tables in a dataset, see Set a dataset default key.
Learn about this finding type's supported assets and scan settings.
Binary authorization disabled
Category name in the API: BINARY_AUTHORIZATION_DISABLED
Binary Authorization is disabled on a GKE cluster.
Binary Authorization includes an optional feature that protects supply chain security by only allowing container images signed by trusted authorities during the development process to be deployed in the cluster. By enforcing signature-based deployment, you gain tighter control over your container environment, ensuring only verified images are allowed to be deployed.
To remediate this finding, complete the following steps:
Go to the Kubernetes clusters page in the Google Cloud console.
In the Security section, click Edit in the Binary Authorization row.
If the cluster configuration recently changed, the edit button might be disabled. If you aren't able to edit the cluster settings, wait a few minutes and then try again.
In the dialog, select Enable Binary Authorization.
Click Save changes.
Go to the Binary Authorization setup page.
Ensure a policy that requires attestors is configured and the project default rule is not configured to Allow all images. For more information, see Set up for GKE.
To ensure that images that violate the policy are allowed to be deployed and violations are logged to Cloud Audit Logs, you can enable dry-run mode.
Learn about this finding type's supported assets and scan settings.
Bucket CMEK disabled
Category name in the API: BUCKET_CMEK_DISABLED
A bucket is not encrypted with customer-managed encryption keys (CMEK).
Setting a default CMEK on a bucket gives you more control over access to your data. For more information, see Customer-managed encryption keys.
To remediate this finding, use CMEK with a bucket by following Using customer-managed encryption keys. CMEK incurs additional costs related to Cloud KMS.
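If you manage buckets with the gcloud CLI, a minimal sketch of setting a bucket's default key, assuming the bucket and key resource names are placeholders and the key is in a location compatible with the bucket:
# Set a default customer-managed key for new objects written to the bucket.
gcloud storage buckets update gs://my-bucket --default-encryption-key=projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key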
Learn about this finding type's supported assets and scan settings.
Bucket IAM not monitored
Category name in the API: BUCKET_IAM_NOT_MONITORED
Log metrics and alerts aren't configured to monitor Cloud Storage IAM permission changes.
Monitoring changes to Cloud Storage bucket permissions helps you identify over-privileged users or suspicious activity. For more information, see Overview of logs-based metrics.
Depending on the quantity of information, Cloud Monitoring costs can be significant. To understand your usage of the service and its costs, see Optimize cost: Cloud operations.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, complete the following steps:
Create metric
Go to the Logs-based Metrics page in the Google Cloud console.
Click Create Metric.
Under Metric Type, select Counter.
Under Details:
- Set a Log metric name.
- Add a description.
- Set Units to 1.
Under Filter selection, copy and paste the following text into the Build filter box, replacing existing text, if necessary:
resource.type=gcs_bucket AND protoPayload.methodName="storage.setIamPermissions"
Click Create Metric. You see a confirmation.
Create Alert Policy
- In the Google Cloud console, go to the Log-based Metrics page. If you use the search bar to find this page, then select the result whose subheading is Logging.
- Under the User-defined metrics section, select the metric you created in the previous section.
- Click More, and then click Create alert from metric. The New condition dialog opens with the metric and data transformation options pre-populated.
- Click Next.
- Review the pre-populated settings. You might want to modify the Threshold value.
- Click Condition name and enter a name for the condition.
- Click Next.
- To add notifications to your alerting policy, click Notification channels. In the dialog, select one or more notification channels from the menu, and then click OK.
- To be notified when incidents are opened and closed, check Notify on incident closure. By default, notifications are sent only when incidents are opened.
- Optional: Update the Incident autoclose duration. This field determines when Monitoring closes incidents in the absence of metric data.
- Optional: Click Documentation, and then add any information that you want included in a notification message.
- Click Alert name and enter a name for the alerting policy.
- Click Create Policy.
Learn about this finding type's supported assets and scan settings.
Bucket logging disabled
Category name in the API: BUCKET_LOGGING_DISABLED
There is a storage bucket without logging enabled.
To help investigate security issues and monitor storage consumption, enable access logs and storage information for your Cloud Storage buckets. Access logs provide information for all requests made on a specified bucket, and the storage logs provide information about the storage consumption of that bucket.
To remediate this finding, set up logging for the bucket indicated by the Security Health Analytics finding by completing the usage logs & storage logs guide.
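If you use the command line, a sketch with gsutil, assuming gs://my-log-bucket already exists and is writable by the Cloud Storage logging service, and gs://my-bucket is the bucket from the finding:
# Enable usage and storage logging for the bucket, writing logs to a separate log bucket.
gsutil logging set on -b gs://my-log-bucket gs://my-bucket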
Learn about this finding type's supported assets and scan settings.
Bucket policy only disabled
Category name in the API: BUCKET_POLICY_ONLY_DISABLED
Uniform bucket-level access, previously called Bucket Policy Only, isn't configured.
Uniform bucket-level access simplifies bucket access control by disabling object-level permissions (ACLs). When enabled, only bucket-level IAM permissions grant access to the bucket and the objects it contains. For more information, see Uniform bucket-level access.
To remediate this finding, complete the following steps:
Go to the Cloud Storage browser page in the Google Cloud console.
In the list of buckets, click the name of the desired bucket.
Click the Configuration tab.
Under Permissions, in the row for Access control, click Edit access control model.
In the dialog, select Uniform.
Click Save.
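From the command line, a sketch of the same change, assuming gs://my-bucket is a placeholder:
# Enable uniform bucket-level access on the bucket.
gcloud storage buckets update gs://my-bucket --uniform-bucket-level-access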
Learn about this finding type's supported assets and scan settings.
Cloud Asset API disabled
Category name in the API: CLOUD_ASSET_API_DISABLED
Cloud Asset Inventory service is not enabled for the project.
To remediate this finding, complete the following steps:
Go to the API Library page in the Google Cloud console.
Search for Cloud Asset Inventory.
Select the result for Cloud Asset API service.
Ensure that API Enabled is displayed.
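You can also enable the API from the command line; for example, assuming your-project-id is a placeholder:
# Enable the Cloud Asset API for the project.
gcloud services enable cloudasset.googleapis.com --project=your-project-id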
Cluster logging disabled
Category name in the API: CLUSTER_LOGGING_DISABLED
Logging isn't enabled for a GKE cluster.
To help investigate security issues and monitor usage, enable Cloud Logging on your clusters.
Depending on the quantity of information, Cloud Logging costs can be significant. To understand your usage of the service and its cost, see Optimize cost: Cloud operations.
To remediate this finding, complete the following steps:
Go to the Kubernetes clusters page in the Google Cloud console.
Select the cluster listed in the Security Health Analytics finding.
Click Edit.
If the cluster configuration recently changed, the edit button might be disabled. If you aren't able to edit the cluster settings, wait a few minutes and then try again.
On the Legacy Stackdriver Logging or Stackdriver Kubernetes Engine Monitoring drop-down list, select Enabled.
These options aren't compatible. Make sure that you use either Stackdriver Kubernetes Engine Monitoring alone, or Legacy Stackdriver Logging with Legacy Stackdriver Monitoring.
Click Save.
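On current GKE versions, you can also enable cluster logging from the command line; a sketch, assuming my-cluster and us-central1-a are placeholders and your gcloud release supports the --logging flag:
# Enable system and workload logging for the cluster.
gcloud container clusters update my-cluster --zone=us-central1-a --logging=SYSTEM,WORKLOAD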
Learn about this finding type's supported assets and scan settings.
Cluster monitoring disabled
Category name in the API: CLUSTER_MONITORING_DISABLED
Monitoring is disabled on GKE clusters.
To help investigate security issues and monitor usage, enable Cloud Monitoring on your clusters.
Depending on the quantity of information, Cloud Monitoring costs can be significant. To understand your usage of the service and its costs, see Optimize cost: Cloud operations.
To remediate this finding, complete the following steps:
Go to the Kubernetes clusters page in the Google Cloud console.
Select the cluster listed in the Security Health Analytics finding.
Click Edit.
If the cluster configuration recently changed, the edit button might be disabled. If you aren't able to edit the cluster settings, wait a few minutes and then try again.
On the Legacy Stackdriver Monitoring or Stackdriver Kubernetes Engine Monitoring drop-down list, select Enabled.
These options aren't compatible. Make sure that you use either Stackdriver Kubernetes Engine Monitoring alone, or Legacy Stackdriver Monitoring with Legacy Stackdriver Logging.
Click Save.
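On current GKE versions, a command-line sketch of the same change, assuming my-cluster and us-central1-a are placeholders and your gcloud release supports the --monitoring flag:
# Enable system monitoring for the cluster.
gcloud container clusters update my-cluster --zone=us-central1-a --monitoring=SYSTEM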
Learn about this finding type's supported assets and scan settings.
Cluster private Google access disabled
Category name in the API: CLUSTER_PRIVATE_GOOGLE_ACCESS_DISABLED
Cluster hosts are not configured to use only private, internal IP addresses to access Google APIs.
Private Google Access enables virtual machine (VM) instances with only private, internal IP addresses to reach the public IP addresses of Google APIs and services. For more information, see Configuring Google Private Access.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, complete the following steps:
Go to the Virtual Private Cloud networks page in the Google Cloud console.
In the list of networks, click the name of the desired network.
On the VPC network details page, click the Subnets tab.
In the list of subnets, click the name of the subnet associated with the Kubernetes cluster in the finding.
On the Subnet details page, click Edit.
Under Private Google Access, select On.
Click Save.
To remove public (external) IPs from VM instances whose only external traffic is to Google APIs, see Unassigning a static external IP address.
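You can also turn on Private Google Access for the subnet from the command line; a sketch, assuming my-subnet and us-central1 are placeholders:
# Enable Private Google Access on the subnet used by the cluster.
gcloud compute networks subnets update my-subnet --region=us-central1 --enable-private-ip-google-access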
Learn about this finding type's supported assets and scan settings.
Cluster secrets encryption disabled
Category name in the API: CLUSTER_SECRETS_ENCRYPTION_DISABLED
Application-layer secrets encryption is disabled on a GKE cluster.
Application-layer secrets encryption ensures GKE secrets are encrypted using Cloud KMS keys. The feature provides an additional layer of security for sensitive data, such as user-defined secrets and secrets required for the operation of the cluster, such as service account keys, which are all stored in etcd.
To remediate this finding, complete the following steps:
Go to the Cloud KMS keys page in the Google Cloud console.
Review your application keys or create a database encryption key (DEK). For more information, see Creating a Cloud KMS key.
Go to the Kubernetes clusters page.
Select the cluster in the finding.
Under Security, in the Application-layer secrets encryption field, click Edit Application-layer Secrets Encryption.
Select the Enable Application-layer Secrets Encryption checkbox, and then choose the DEK you created.
Click Save Changes.
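A command-line sketch of the same change, assuming the cluster, zone, and Cloud KMS key resource name are placeholders and the key is in the same region as the cluster:
# Enable application-layer secrets encryption with a Cloud KMS key.
gcloud container clusters update my-cluster --zone=us-central1-a --database-encryption-key=projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key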
Learn about this finding type's supported assets and scan settings.
Cluster shielded nodes disabled
Category name in the API: CLUSTER_SHIELDED_NODES_DISABLED
Shielded GKE nodes are not enabled for a cluster.
Without Shielded GKE nodes, attackers can exploit a vulnerability in a Pod to exfiltrate bootstrap credentials and impersonate nodes in your cluster. The vulnerability can give attackers access to cluster secrets.
To remediate this finding, complete the following steps:
Go to the Kubernetes clusters page in the Google Cloud console.
Select the cluster in the finding.
Under Security, in the Shielded GKE nodes field, click Edit Shielded GKE nodes.
Select the Enable Shielded GKE nodes checkbox.
Click Save Changes.
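From the command line, a sketch of the same change, assuming my-cluster and us-central1-a are placeholders:
# Enable Shielded GKE nodes on the cluster.
gcloud container clusters update my-cluster --zone=us-central1-a --enable-shielded-nodes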
Learn about this finding type's supported assets and scan settings.
Compute project wide SSH keys allowed
Category name in the API: COMPUTE_PROJECT_WIDE_SSH_KEYS_ALLOWED
Project-wide SSH keys are used, allowing login to all instances in the project.
Using project-wide SSH keys makes SSH key management easier but, if compromised, poses a security risk which can impact all instances within a project. You should use instance-specific SSH keys, which limit the attack surface if SSH keys are compromised. For more information, see Managing SSH keys in metadata.
To remediate this finding, complete the following steps:
Go to the VM instances page in the Google Cloud console.
In the list of instances, click the name of the instance in the finding.
On the VM instance details page, click Edit.
Under SSH Keys, select Block project-wide SSH keys.
Click Save.
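A command-line sketch of the same change, assuming my-instance and us-central1-a are placeholders:
# Block project-wide SSH keys on the instance.
gcloud compute instances add-metadata my-instance --zone=us-central1-a --metadata=block-project-ssh-keys=TRUE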
Learn about this finding type's supported assets and scan settings.
Compute Secure Boot disabled
Category name in the API: COMPUTE_SECURE_BOOT_DISABLED
A Shielded VM does not have Secure Boot enabled.
Using Secure Boot helps protect your virtual machines against rootkits and bootkits. Compute Engine does not enable Secure Boot by default because some unsigned drivers and low-level software are not compatible. If your VM does not use incompatible software and it boots with Secure Boot enabled, Google recommends using Secure Boot. If you are using third-party modules with Nvidia drivers, make sure they are compatible with Secure Boot before you enable it.
For more information, see Secure Boot.
To remediate this finding, complete the following steps:
Go to the VM instances page in the Google Cloud console.
In the list of instances, click the name of the instance in the finding.
On the VM instance details page, click Stop.
After the instance stops, click Edit.
Under Shielded VM, select Turn on Secure Boot.
Click Save.
Click Start to start the instance.
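A command-line sketch of the same sequence, assuming my-instance and us-central1-a are placeholders:
# Stop the VM, enable Secure Boot, then start it again.
gcloud compute instances stop my-instance --zone=us-central1-a
gcloud compute instances update my-instance --zone=us-central1-a --shielded-secure-boot
gcloud compute instances start my-instance --zone=us-central1-a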
Learn about this finding type's supported assets and scan settings.
Compute serial ports enabled
Category name in the API: COMPUTE_SERIAL_PORTS_ENABLED
Serial ports are enabled for an instance, allowing connections to the instance's serial console.
If you enable the interactive serial console on an instance, clients can attempt to connect to that instance from any IP address. Therefore, interactive serial console support should be disabled. For more information, see Enabling access for a project.
To remediate this finding, complete the following steps:
Go to the VM instances page in the Google Cloud console.
In the list of instances, click the name of the instance in the finding.
On the VM instance details page, click Edit.
Under Remote access, clear Enable connecting to serial ports.
Click Save.
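A command-line sketch of the same change, assuming my-instance and us-central1-a are placeholders:
# Disable interactive serial console access on the instance.
gcloud compute instances add-metadata my-instance --zone=us-central1-a --metadata=serial-port-enable=FALSE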
Learn about this finding type's supported assets and scan settings.
Confidential Computing disabled
Category name in the API: CONFIDENTIAL_COMPUTING_DISABLED
A Compute Engine instance doesn't have Confidential Computing enabled.
Confidential Computing adds a third pillar to the end-to-end encryption story by encrypting data while in use. With the confidential execution environments provided by Confidential Computing and AMD Secure Encrypted Virtualization (SEV), Google Cloud keeps sensitive code and other data encrypted in memory during processing.
Confidential Computing can only be enabled when an instance is created. Thus, you must delete the current instance and create a new one.
For more information, see Confidential VM and Compute Engine.
To remediate this finding, complete the following steps:
Go to the VM instances page in the Google Cloud console.
In the list of instances, click the name of the instance in the finding.
On the VM instance details page, click Delete.
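A command-line sketch of creating a replacement Confidential VM, assuming the instance name, zone, and machine type are placeholders; Confidential VMs require a supported machine family such as N2D:
# Create a new instance with Confidential Computing enabled.
gcloud compute instances create my-confidential-vm --zone=us-central1-a --machine-type=n2d-standard-2 --confidential-compute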
Learn about this finding type's supported assets and scan settings.
COS not used
Category name in the API: COS_NOT_USED
Compute Engine VMs aren't using the Container-Optimized OS, which is designed to run Docker containers on Google Cloud securely.
Container-Optimized OS is Google's recommended OS for hosting and running containers on Google Cloud. Its small OS footprint minimizes security exposure, while automatic updates patch security vulnerabilities in a timely manner. For more information, see Container-Optimized OS Overview.
To remediate this finding, complete the following steps:
Go to the Kubernetes clusters page in the Google Cloud console.
In the list of clusters, click the name of the cluster in the finding.
Click the Nodes tab.
For each node pool:
- Click the name of the node pool to go to its detail page.
- Click Edit.
- Under Nodes -> Image type, click Change.
- Select Container-Optimized OS, and then click Change.
- Click Save.
Learn about this finding type's supported assets and scan settings.
Custom role not monitored
Category name in the API: CUSTOM_ROLE_NOT_MONITORED
Log metrics and alerts aren't configured to monitor custom role changes.
IAM provides predefined and custom roles that grant access to specific Google Cloud resources. By monitoring role creation, deletion, and update activities, you can identify over-privileged roles at early stages. For more information, see Overview of logs-based metrics.
Depending on the quantity of information, Cloud Monitoring costs can be significant. To understand your usage of the service and its costs, see Optimize cost: Cloud operations.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, complete the following steps:
Create metric
Go to the Logs-based Metrics page in the Google Cloud console.
Click Create Metric.
Under Metric Type, select Counter.
Under Details:
- Set a Log metric name.
- Add a description.
- Set Units to 1.
Under Filter selection, copy and paste the following text into the Build filter box, replacing existing text, if necessary:
resource.type="iam_role" AND (protoPayload.methodName="google.iam.admin.v1.CreateRole" OR protoPayload.methodName="google.iam.admin.v1.DeleteRole" OR protoPayload.methodName="google.iam.admin.v1.UpdateRole")
Click Create Metric. You see a confirmation.
Create Alert Policy
- In the Google Cloud console, go to the Log-based Metrics page. If you use the search bar to find this page, then select the result whose subheading is Logging.
- Under the User-defined metrics section, select the metric you created in the previous section.
- Click More, and then click Create alert from metric. The New condition dialog opens with the metric and data transformation options pre-populated.
- Click Next.
- Review the pre-populated settings. You might want to modify the Threshold value.
- Click Condition name and enter a name for the condition.
- Click Next.
- To add notifications to your alerting policy, click Notification channels. In the dialog, select one or more notification channels from the menu, and then click OK.
- To be notified when incidents are opened and closed, check Notify on incident closure. By default, notifications are sent only when incidents are opened.
- Optional: Update the Incident autoclose duration. This field determines when Monitoring closes incidents in the absence of metric data.
- Optional: Click Documentation, and then add any information that you want included in a notification message.
- Click Alert name and enter a name for the alerting policy.
- Click Create Policy.
Learn about this finding type's supported assets and scan settings.
Dataproc CMEK disabled
Category name in the API: DATAPROC_CMEK_DISABLED
A Dataproc cluster was created without an encryption configuration CMEK. With CMEK, keys that you create and manage in Cloud Key Management Service wrap the keys that Google Cloud uses to encrypt your data, giving you more control over access to your data.
To remediate this finding, complete the following steps:
Go to the Dataproc cluster page in the Google Cloud console.
Select your project and click Create Cluster.
In the Manage security section, click Encryption, and then select Customer-managed key.
Select a customer-managed key from the list.
If you don't have a customer-managed key, then you need to create one to use. For more information, see Customer-managed encryption keys.
Ensure that the selected KMS key has the Cloud KMS CryptoKey Encrypter/Decrypter role assigned to the Dataproc Cluster service account ("serviceAccount:service-project_number@compute-system.iam.gserviceaccount.com").
After the cluster is created, migrate all of your workloads from the older cluster to the new cluster.
Go to Dataproc clusters and select your project.
Select the old cluster and click Delete cluster.
Repeat all steps above for other Dataproc clusters available in the selected project.
Learn about this finding type's supported assets and scan settings.
Dataproc image outdated
Category name in the API: DATAPROC_IMAGE_OUTDATED
A Dataproc cluster was created using a Dataproc image version that is affected by security vulnerabilities in the Apache Log4j 2 utility (CVE-2021-44228 and CVE-2021-45046).
This detector finds vulnerabilities by checking if the softwareConfig.imageVersion field in the config property of a Cluster has any of the following affected versions:
- Image versions earlier than 1.3.95.
- Subminor image versions earlier than 1.4.77, 1.5.53, and 2.0.27.
The version number of a custom Dataproc image can be overridden manually. Consider the following scenarios:
- One can modify the version of an affected custom image to make it appear to be unaffected. In this case, this detector doesn't emit a finding.
- One can override the version of an unaffected custom image with one that is known to have the vulnerability. In this case, this detector emits a false positive finding. To suppress these false positive findings, you can mute them.
To remediate this finding, recreate and update the affected cluster.
Learn about this finding type's supported assets and scan settings.
Dataset CMEK disabled
Category name in the API: DATASET_CMEK_DISABLED
A BigQuery dataset is not configured to use a default customer-managed encryption key (CMEK).
With CMEK, keys that you create and manage in Cloud KMS wrap the keys that Google Cloud uses to encrypt your data, giving you more control over access to your data. For more information, see Protecting data with Cloud KMS keys.
To remediate this finding, complete the following steps:
You can't switch a table in place between default encryption and CMEK encryption. To set a default CMEK key with which to encrypt all new tables in the dataset, follow the instructions to Set a dataset default key.
Setting a default key will not retroactively re-encrypt tables currently in the dataset with a new key. To use CMEK for existing data, do the following:
- Create a new dataset.
- Set a default CMEK key on the dataset you created.
- To copy tables to your CMEK-enabled dataset, follow the instructions for Copying a table.
- After copying data successfully, delete the original datasets.
Learn about this finding type's supported assets and scan settings.
Default network
Category name in the API: DEFAULT_NETWORK
The default network exists in a project.
Default networks have automatically created firewall rules and network configurations which might not be secure. For more information, see Default network.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, complete the following steps:
Go to the VPC networks page in the Google Cloud console.
In the list of networks, click the name of the default network.
In the VPC network details page, click Delete VPC Network.
To create a new network with custom firewall rules, see Creating networks.
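From the command line, a sketch of the same change; this fails if any resources still use the network, so migrate or delete them first:
# Delete the default network.
gcloud compute networks delete default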
Learn about this finding type's supported assets and scan settings.
Default service account used
Category name in the API: DEFAULT_SERVICE_ACCOUNT_USED
A Compute Engine instance is configured to use the default service account.
The default Compute Engine service account has the Editor role on the project, which allows read and write access to most Google Cloud services. To defend against privilege escalations and unauthorized access, don't use the default Compute Engine service account. Instead, create a new service account and assign only the permissions needed by your instance. Read Access control for information on roles and permissions.
To remediate this finding, complete the following steps:
Go to the VM instances page in the Google Cloud console.
Select the instance related to the Security Health Analytics finding.
On the Instance details page that loads, click Stop.
After the instance stops, click Edit.
Under the Service Account section, select a service account other than the default Compute Engine service account. You might first need to create a new service account. Read Access control for information on IAM roles and permissions.
Click Save. The new configuration appears on the Instance details page.
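A command-line sketch of the same sequence, assuming my-instance, us-central1-a, and example-sa@your-project-id.iam.gserviceaccount.com are placeholders for your own instance, zone, and dedicated service account:
# Stop the VM, attach a dedicated service account, then start it again.
gcloud compute instances stop my-instance --zone=us-central1-a
gcloud compute instances set-service-account my-instance --zone=us-central1-a --service-account=example-sa@your-project-id.iam.gserviceaccount.com --scopes=cloud-platform
gcloud compute instances start my-instance --zone=us-central1-a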
Learn about this finding type's supported assets and scan settings.
Disk CMEK disabled
Category name in the API: DISK_CMEK_DISABLED
Disks on this VM are not encrypted with customer-managed encryption keys (CMEK).
With CMEK, keys that you create and manage in Cloud KMS wrap the keys that Google Cloud uses to encrypt your data, giving you more control over access to your data. For more information, see Protecting Resources with Cloud KMS Keys.
To remediate this finding, complete the following steps:
Go to the Compute Engine disks page in the Google Cloud console.
In the list of disks, click the name of the disk indicated in the finding.
On the Manage disk page, click Delete.
To create a new disk with CMEK enabled, see Encrypt a new persistent disk with your own keys. CMEK incurs additional costs related to Cloud KMS.
Learn about this finding type's supported assets and scan settings.
Disk CSEK disabled
Category name in the API: DISK_CSEK_DISABLED
Disks on this VM are not encrypted with Customer-Supplied Encryption Keys (CSEK). Disks for critical VMs should be encrypted with CSEK.
If you provide your own encryption keys, Compute Engine uses your key to protect the Google-generated keys used to encrypt and decrypt your data. For more information, see Customer-Supplied Encryption Keys. CSEK incurs additional costs related to Cloud KMS.
To remediate this finding, complete the following steps:
Delete and create disk
You can only encrypt new persistent disks with your own key. You cannot encrypt existing persistent disks with your own key.
Go to the Compute Engine disks page in the Google Cloud console.
In the list of disks, click the name of the disk indicated in the finding.
On the Manage disk page, click Delete.
To create a new disk with CSEK enabled, see Encrypt disks with customer-supplied encryption keys.
Complete the remaining steps to enable the detector.
Enable the detector
Go to Security Command Center's Assets page in the Google Cloud console.
In the Resource type section of the Quick filters panel, select compute.Disk.
If you don't see compute.Disk, click View more, enter Disk in the search field, and then click Apply. The Results panel updates to show only instances of the compute.Disk resource type.
In the Display name column, select the box next to the name of the disk you want to use with CSEK, and then click Set Security Marks.
In the dialog, click Add Mark.
In the key field, enter enforce_customer_supplied_disk_encryption_keys, and in the value field, enter true.
Click Save.
Learn about this finding type's supported assets and scan settings.
DNS logging disabled
Category name in the API: DNS_LOGGING_DISABLED
Cloud DNS logging is disabled for a VPC network. Monitoring Cloud DNS logs provides visibility into the DNS names requested by the clients within the VPC network. These logs can be monitored for anomalous domain names and evaluated against threat intelligence. We recommend enabling DNS logging for VPC networks.
Depending on the quantity of information, Cloud DNS logging costs can be significant. To understand your usage of the service and its cost, see Pricing for Google Cloud Observability: Cloud Logging.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, complete the following steps:
Go to the VPC networks page in the Google Cloud console.
In the list of networks, click the name of the VPC network.
Create a new server policy (if one doesn't exist) or edit an existing policy:
If the network doesn't have a DNS server policy, complete the following steps:
- Click Edit.
- In the DNS server policy field, click Create a new server policy.
- Enter a name for the new server policy.
- Set Logs to On.
- Click Save.
If the network has a DNS server policy, complete the following steps:
- In the DNS server policy field, click the name of the DNS policy.
- Click Edit policy.
- Set Logs to On.
- Click Save.
Learn about this finding type's supported assets and scan settings.
DNSSEC disabled
Category name in the API: DNSSEC_DISABLED
Domain Name System Security Extensions (DNSSEC) is disabled for Cloud DNS zones.
DNSSEC validates DNS responses and mitigates risks, such as DNS hijacking and person-in-the-middle attacks, by cryptographically signing DNS records. You should enable DNSSEC. For more information, see DNS Security Extensions (DNSSEC) overview.
To remediate this finding, complete the following steps:
Go to the Cloud DNS page in the Google Cloud console.
Locate the row with the DNS zone indicated in the finding.
Click the DNSSEC setting in the row and then, under DNSSEC, select On.
Read the dialog that appears. If satisfied, click Enable.
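From the command line, a sketch of the same change, assuming my-zone is a placeholder for the managed zone in the finding:
# Turn on DNSSEC for the managed zone.
gcloud dns managed-zones update my-zone --dnssec-state=on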
Learn about this finding type's supported assets and scan settings.
Effectively Anonymous Users Granted GKE Cluster Access
Category name in the API: GKE_PRIVILEGE_ESCALATION_DEFAULT_USERS_GROUPS
Someone created an RBAC binding that references one of the following users or groups:
- system:anonymous
- system:authenticated
- system:unauthenticated
These users and groups are effectively anonymous and shouldn't be used in RoleBindings or ClusterRoleBindings. For details, see Avoid default roles and groups.
To remediate this finding, apply the following steps to your affected resources:
- Open the manifest for each affected ClusterRoleBinding or RoleBinding.
- Set the following restricted fields to one of the allowed values.
Restricted fields: subjects[*].name
Allowed values: any groups, users, or service accounts, not including system:anonymous, system:authenticated, or system:unauthenticated.
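To locate the offending bindings before editing their manifests, the following is a sketch using kubectl and jq, both assumed to be installed and pointed at the affected cluster:
# List ClusterRoleBindings and RoleBindings whose subjects include the default anonymous users or groups.
kubectl get clusterrolebindings,rolebindings --all-namespaces -o json | jq -r '.items[] | select(any(.subjects[]?; .name == "system:anonymous" or .name == "system:authenticated" or .name == "system:unauthenticated")) | "\(.kind) \(.metadata.namespace // "cluster-wide") \(.metadata.name)"'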
Egress deny rule not set
Category name in the API: EGRESS_DENY_RULE_NOT_SET
An egress deny rule is not set on a firewall.
A firewall that denies all egress network traffic prevents any unwanted outbound network connections, except those connections other firewalls explicitly authorize. For more information, see Egress cases.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
Click Create Firewall Rule.
Give the firewall a name and, optionally, a description.
Under Direction of traffic, select Egress.
Under Action on match, select Deny.
In the Targets drop-down menu, select All instances in the network.
In the Destination filter drop-down menu, select IP ranges, and then type 0.0.0.0/0 into the Destination IP ranges box.
Under Protocols and ports, select Deny all.
Click Disable Rule then, under Enforcement, select Enabled.
Click Create.
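A command-line sketch of an equivalent rule, assuming my-network and the rule name are placeholders and that the low priority value fits your rule ordering:
# Create a low-priority rule that denies all egress traffic on the network.
gcloud compute firewall-rules create deny-all-egress --network=my-network --direction=EGRESS --action=DENY --rules=all --destination-ranges=0.0.0.0/0 --priority=65534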
Learn about this finding type's supported assets and scan settings.
Essential contacts not configured
Category name in the API: ESSENTIAL_CONTACTS_NOT_CONFIGURED
Your organization has not designated a person or group to receive notifications from Google Cloud about important events such as attacks, vulnerabilities, and data incidents within your Google Cloud organization. We recommend that you designate as an essential contact one or more persons or groups in your business organization.
To remediate this finding, complete the following steps:
Go to the Essential Contacts page in the Google Cloud console.
Make sure the organization appears in the resource selector at the top of the page. The resource selector tells you what project, folder, or organization you are currently managing contacts for.
Click +Add contact. The Add a contact panel opens.
In the Email and Confirm Email fields, enter the email address of the contact.
From the Notification categories section, select the notification categories that you want the contact to receive communications for. Ensure that appropriate email addresses are configured for each of the following notification categories:
- Legal
- Security
- Suspension
- Technical
Click Save.
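If you prefer the command line, gcloud essential-contacts can add a contact at the organization level. This is a sketch; the email address and ORGANIZATION_ID are placeholders, and you can adjust the notification categories to match your needs:
gcloud essential-contacts create \
    --email="secops@example.com" \
    --notification-categories=LEGAL,SECURITY,SUSPENSION,TECHNICAL \
    --organization=ORGANIZATION_ID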
Learn about this finding type's supported assets and scan settings.
Firewall not monitored
Category name in the API: FIREWALL_NOT_MONITORED
Log metrics and alerts aren't configured to monitor VPC Network Firewall rule changes.
Monitoring firewall rules creation and update events gives you insight into network access changes, and can help you quickly detect suspicious activity. For more information, see Overview of logs-based metrics.
Depending on the quantity of information, Cloud Monitoring costs can be significant. To understand your usage of the service and its costs, see Optimize cost: Cloud operations.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, complete the following steps:
Create metric
Go to the Logs-based Metrics page in the Google Cloud console.
Click Create Metric.
Under Metric Type, select Counter.
Under Details:
- Set a Log metric name.
- Add a description.
- Set Units to 1.
Under Filter selection, copy and paste the following text into the Build filter box, replacing existing text, if necessary:
resource.type="gce_firewall_rule" AND (protoPayload.methodName:"compute.firewalls.insert" OR protoPayload.methodName:"compute.firewalls.patch" OR protoPayload.methodName:"compute.firewalls.delete")
Click Create Metric. You see a confirmation.
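As an alternative to the console, the same counter metric can be created with gcloud; the metric name below is a placeholder:
gcloud logging metrics create firewall-rule-changes \
    --description="Counts firewall rule insert, patch, and delete events" \
    --log-filter='resource.type="gce_firewall_rule" AND (protoPayload.methodName:"compute.firewalls.insert" OR protoPayload.methodName:"compute.firewalls.patch" OR protoPayload.methodName:"compute.firewalls.delete")'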
Create Alert Policy
- In the Google Cloud console, go to the Log-based Metrics page. If you use the search bar to find this page, then select the result whose subheading is Logging.
- Under the User-defined metrics section, select the metric you created in the previous section.
- Click More, and then click Create alert from metric. The New condition dialog opens with the metric and data transformation options pre-populated.
- Click Next.
- Review the pre-populated settings. You might want to modify the Threshold value.
- Click Condition name and enter a name for the condition.
- Click Next.
- To add notifications to your alerting policy, click Notification channels. In the dialog, select one or more notification channels from the menu, and then click OK. To be notified when incidents are opened and closed, check Notify on incident closure. By default, notifications are sent only when incidents are opened.
- Optional: Update the Incident autoclose duration. This field determines when Monitoring closes incidents in the absence of metric data.
- Optional: Click Documentation, and then add any information that you want included in a notification message.
- Click Alert name and enter a name for the alerting policy.
- Click Create Policy.
Learn about this finding type's supported assets and scan settings.
Firewall rule logging disabled
Category name in the API: FIREWALL_RULE_LOGGING_DISABLED
Firewall rules logging is disabled.
Firewall rules logging lets you audit, verify, and analyze the effects of your firewall rules. It can be useful for auditing network access or providing early warning that the network is being used in an unapproved manner. The cost of logs can be significant. For more information on Firewall Rules Logging and its cost, see Using Firewall Rules Logging.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
In the list of firewall rules, click the name of the desired firewall rule.
Click Edit.
Under Logs, select On.
Click Save.
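The equivalent change from the command line is a single gcloud update; FIREWALL_RULE_NAME is a placeholder for the rule in the finding:
gcloud compute firewall-rules update FIREWALL_RULE_NAME --enable-logging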
Learn about this finding type's supported assets and scan settings.
Flow logs disabled
Category name in the API: FLOW_LOGS_DISABLED
There is a VPC subnetwork that has flow logs disabled.
VPC Flow Logs record a sample of network flows sent from and received by VM instances. These logs can be used for network monitoring, forensics, real-time security analysis, and expense optimization. For more information about flow logs and their cost, see Using VPC Flow Logs.
To remediate this finding, complete the following steps:
Go to the VPC networks page in the Google Cloud console.
In the list of networks, click the name of the desired network.
On the VPC network details page, click the Subnets tab.
In the list of subnets, click the name of the subnet indicated in the finding.
On the Subnet details page, click Edit.
Under Flow logs, select On.
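You can also enable flow logs on the subnet with gcloud; SUBNET_NAME and REGION are placeholders:
gcloud compute networks subnets update SUBNET_NAME \
    --region=REGION \
    --enable-flow-logs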
Learn about this finding type's supported assets and scan settings.
Flow logs settings not recommended
Category name in the API: VPC_FLOW_LOGS_SETTINGS_NOT_RECOMMENDED
In the configuration of a subnet in a VPC network, the VPC Flow Logs service is either off or is not configured according to CIS Benchmark 1.3 recommendations. VPC Flow Logs records a sample of network flows sent from and received by VM instances which can be used to detect threats.
For more information about VPC Flow Logs and their cost, see Using VPC Flow Logs.
To remediate this finding, complete the following steps:
Go to the VPC networks page in the Google Cloud console.
In the list of networks, click the name of the network.
On the VPC network details page, click the Subnets tab.
In the list of subnets, click the name of the subnet indicated in the finding.
On the Subnet details page, click Edit.
Under Flow logs, select On.
- Optionally, modify the configuration of the logs by clicking the Configure logs button to expand the tab. The CIS Benchmarks recommend the following settings:
  - Set the Aggregation Interval to 5 SEC.
  - Under Additional fields, select the Include metadata option.
  - Set the Sample rate to 100%.
- Click the SAVE button.
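The CIS-recommended settings can also be applied with a single gcloud command; SUBNET_NAME and REGION are placeholders:
gcloud compute networks subnets update SUBNET_NAME \
    --region=REGION \
    --enable-flow-logs \
    --logging-aggregation-interval=interval-5-sec \
    --logging-flow-sampling=1.0 \
    --logging-metadata=include-all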
Learn about this finding type's supported assets and scan settings.
Full API access
Category name in the API: FULL_API_ACCESS
A Compute Engine instance is configured to use the default service account with full access to all Google Cloud APIs.
An instance configured with the default service account and the API access scope set to Allow full access to all Cloud APIs might allow users to perform operations or API calls for which they don't have IAM permissions. For more information, see Compute Engine default service account.
To remediate this finding, complete the following steps:
Go to the VM instances page in the Google Cloud console.
In the list of instances, click the name of the instance in the finding.
If the instance is running, click Stop.
When the instance is stopped, click Edit.
In the Security and access section, under Service accounts, select Compute Engine default service account.
Under Access scopes, select either Allow default access or Set access for each API. This limits the APIs that any process or workload that uses the default VM service account can access.
If you selected Set access for each API, do the following:
- Disable Cloud Platform by setting it to None.
- Enable the specific APIs that the default VM service account requires access to.
Click Save.
Click Start to start the instance.
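If you script this change, the following gcloud sequence is a sketch of the same remediation. INSTANCE_NAME, ZONE, SERVICE_ACCOUNT_EMAIL, and the scope list are placeholders; keep only the scopes that workloads on the VM actually need:
gcloud compute instances stop INSTANCE_NAME --zone=ZONE
gcloud compute instances set-service-account INSTANCE_NAME \
    --zone=ZONE \
    --service-account=SERVICE_ACCOUNT_EMAIL \
    --scopes=logging-write,monitoring
gcloud compute instances start INSTANCE_NAME --zone=ZONE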
Learn about this finding type's supported assets and scan settings.
HTTP load balancer
Category name in the API: HTTP_LOAD_BALANCER
A Compute Engine instance uses a load balancer that is configured to use a target HTTP proxy instead of a target HTTPS proxy.
To protect the integrity of your data and prevent intruders from tampering with your communications, configure your HTTP(S) load balancers to allow only HTTPS traffic. For more information, see External HTTP(S) Load Balancing overview.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, complete the following steps:
Go to the Target proxies page in the Google Cloud console.
In the list of target proxies, click the name of the target proxy in the finding.
Click the link under the URL map.
Click Edit.
Click Frontend configuration.
Delete all Frontend IP and port configurations that allow HTTP traffic and create new ones that allow HTTPS traffic.
Learn about this finding type's supported assets and scan settings.
Instance OS login disabled
Category name in the API: INSTANCE_OS_LOGIN_DISABLED
OS Login is disabled on this Compute Engine instance.
OS Login enables centralized SSH key management with IAM, and it disables metadata-based SSH key configuration on all instances in a project. Learn how to set up and configure OS Login.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, complete the following steps:
Go to the VM instances page in the Google Cloud console.
In the list of instances, click the name of the instance in the finding.
On the Instance details page that loads, click Stop.
After the instance stops, click Edit.
In the Custom metadata section, ensure that the item with the key enable-oslogin has the value TRUE.
Click Save.
Click Start to start the instance.
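Alternatively, set the instance metadata key with gcloud; INSTANCE_NAME and ZONE are placeholders:
gcloud compute instances add-metadata INSTANCE_NAME \
    --zone=ZONE \
    --metadata enable-oslogin=TRUE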
Learn about this finding type's supported assets and scan settings.
Integrity monitoring disabled
Category name in the API: INTEGRITY_MONITORING_DISABLED
Integrity monitoring is disabled on a GKE cluster.
Integrity monitoring lets you monitor and verify the runtime boot integrity of your shielded nodes using Monitoring. This lets you respond to integrity failures and prevent compromised nodes from being deployed into the cluster.
To remediate this finding, complete the following steps:
Once a node is provisioned, it can't be updated to enable integrity monitoring. You must create a new node pool with integrity monitoring enabled.
Go to the Kubernetes clusters page in the Google Cloud console.
Click on the name of the cluster in the finding.
Click on Add Node Pool.
Under the Security tab, ensure that Enable integrity monitoring is enabled.
Click Create.
To migrate your workloads from the existing non-conforming node pools to the new node pools, see Migrating workloads to different machine types.
After your workloads have been moved, delete the original non-conforming node pool.
- On the Kubernetes cluster page, in the Node pools menu, click the name of the node pool you want to delete.
- Click Remove node pool.
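If you create the replacement node pool from the command line, the following gcloud command is a sketch; POOL_NAME, CLUSTER_NAME, and ZONE are placeholders:
gcloud container node-pools create POOL_NAME \
    --cluster=CLUSTER_NAME \
    --zone=ZONE \
    --shielded-integrity-monitoring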
Learn about this finding type's supported assets and scan settings.
Intranode visibility disabled
Category name in the API: INTRANODE_VISIBILITY_DISABLED
Intranode visibility is disabled for a GKE cluster.
Enabling intranode visibility makes your intranode Pod-to-Pod traffic visible to the networking fabric. With this feature, you can use VPC flow logging or other VPC features to monitor or control intranode traffic. To get logs, you need to enable VPC flow logs in the selected subnetwork. For more information, see Using VPC flow logs.
To remediate this finding, complete the following steps:
Go to the Kubernetes clusters page in the Google Cloud console.
In the Networking section, in the Intranode visibility row, click Edit intranode visibility.
If the cluster configuration recently changed, the edit button might be disabled. If you aren't able to edit the cluster settings, wait a few minutes and then try again.
In the dialog, select Enable Intranode visibility.
Click Save Changes.
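The same setting can be changed with gcloud; CLUSTER_NAME and ZONE are placeholders:
gcloud container clusters update CLUSTER_NAME \
    --zone=ZONE \
    --enable-intra-node-visibility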
Learn about this finding type's supported assets and scan settings.
IP alias disabled
Category name in the API: IP_ALIAS_DISABLED
A GKE cluster was created with alias IP ranges disabled.
When you enable alias IP ranges, GKE clusters allocate IP addresses from a known CIDR block, so your cluster is scalable and interacts better with Google Cloud products and entities. For more information, see Alias IP ranges overview.
To remediate this finding, complete the following steps:
You cannot migrate an existing cluster to use alias IPs. To create a new cluster with alias IPs enabled, do the following:
Go to the Kubernetes clusters page in the Google Cloud console.
Click Create.
From the navigation pane, under Cluster, click Networking.
Under Advanced networking options, select Enable VPC-native traffic routing (uses alias IP).
Click Create.
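When creating the new cluster from the command line, VPC-native (alias IP) mode is enabled with the --enable-ip-alias flag; CLUSTER_NAME and ZONE are placeholders:
gcloud container clusters create CLUSTER_NAME \
    --zone=ZONE \
    --enable-ip-alias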
Learn about this finding type's supported assets and scan settings.
IP forwarding enabled
Category name in the API: IP_FORWARDING_ENABLED
IP forwarding is enabled on Compute Engine instances.
Prevent data loss or information disclosure by disabling IP forwarding of data packets for your VMs.
To remediate this finding, complete the following steps:
Go to the VM instances page in the Google Cloud console.
In the list of instances, check the box next to the name of the instance in the finding.
Click Delete.
Select Create Instance to create a new instance to replace the one you deleted.
To ensure IP forwarding is disabled, click Management, disks, networking, SSH keys, and then click Networking.
Under Network interfaces, click Edit.
Under IP forwarding, in the drop-down menu, ensure that Off is selected.
Specify any other instance parameters, and then click Create. For more information, see Creating and starting a VM instance.
Learn about this finding type's supported assets and scan settings.
KMS key not rotated
Category name in the API: KMS_KEY_NOT_ROTATED
Rotation isn't configured on a Cloud KMS encryption key.
Rotating your encryption keys regularly provides protection in case a key gets compromised and limits the number of encrypted messages available to cryptanalysis for a specific key version. For more information, see Key rotation.
To remediate this finding, complete the following steps:
Go to the Cloud KMS keys page in the Google Cloud console.
Click the name of the key ring indicated in the finding.
Click the name of the key indicated in the finding.
Click Edit Rotation Period.
Set the rotation period to a maximum of 90 days.
Click Save.
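You can also set the rotation schedule with gcloud; KEY_NAME, KEY_RING, LOCATION, and NEXT_ROTATION_TIME are placeholders, where NEXT_ROTATION_TIME is a timestamp within the next 90 days:
gcloud kms keys update KEY_NAME \
    --keyring=KEY_RING \
    --location=LOCATION \
    --rotation-period=90d \
    --next-rotation-time=NEXT_ROTATION_TIME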
Learn about this finding type's supported assets and scan settings.
KMS project has owner
Category name in the API: KMS_PROJECT_HAS_OWNER
A user has roles/Owner permissions on a project that has cryptographic keys. For more information, see Permissions and roles.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, complete the following steps:
Go to the IAM page in the Google Cloud console.
If necessary, select the project in the finding.
For each principal assigned the Owner role:
- Click Edit.
- In the Edit permissions panel, next to the Owner role, click Delete.
- Click Save.
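To revoke the role from the command line, the following gcloud command is one option; PROJECT_ID and USER_EMAIL are placeholders:
gcloud projects remove-iam-policy-binding PROJECT_ID \
    --member=user:USER_EMAIL \
    --role=roles/owner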
Learn about this finding type's supported assets and scan settings.
KMS public key
Category name in the API: KMS_PUBLIC_KEY
A Cloud KMS Cryptokey or Cloud KMS Key Ring is public and accessible to anyone on the internet. For more information, see Using IAM with Cloud KMS.
To remediate this finding, if it is related to a Cryptokey:
Go to the Cryptographic Keys page in the Google Cloud console.
Under Name, select the key ring that contains the cryptographic key related to the Security Health Analytics finding.
On the Key ring details page that loads, select the checkbox next to the cryptographic key.
If the INFO PANEL is not displayed, click the SHOW INFO PANEL button.
Use the filter box preceding Role / Principal to search principals for allUsers and allAuthenticatedUsers, and click
Delete to remove access for these principals.
To remediate this finding, if it is related to a Key Ring:
Go to the Cryptographic Keys page in the Google Cloud console.
Find the row with the key ring in the finding and select the checkbox.
If the INFO PANEL is not displayed, click the SHOW INFO PANEL button.
Use the filter box preceding Role / Principal to search principals for allUsers and allAuthenticatedUsers, and click
Delete to remove access for these principals.
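The same bindings can be removed with gcloud. In this sketch, KEY_NAME, KEY_RING, LOCATION, and ROLE are placeholders; run the command once per public principal and role listed in the policy, and use allAuthenticatedUsers in place of allUsers where applicable:
gcloud kms keys remove-iam-policy-binding KEY_NAME \
    --keyring=KEY_RING \
    --location=LOCATION \
    --member=allUsers \
    --role=ROLE
For a key ring, use gcloud kms keyrings remove-iam-policy-binding with the same --member and --role values.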
Learn about this finding type's supported assets and scan settings.
KMS role separation
Category name in the API: KMS_ROLE_SEPARATION
This finding isn't available for project-level activations.
One or more principals have multiple Cloud KMS permissions assigned. We recommend that no account simultaneously has Cloud KMS Admin along with other Cloud KMS permissions. For more information, see Permissions and roles.
To remediate this finding, complete the following steps:
Go to the IAM page in the Google Cloud console.
For each principal listed in the finding, do the following:
- Check whether the role was inherited from a folder or organization resource by looking at the Inheritance column. If the column contains a link to a parent resource, click on the link to go to the parent resource's IAM page.
- Click Edit next to a principal.
- To remove permissions, click Delete next to Cloud KMS Admin. If you want to remove all permissions for the principal, click Delete next to all other permissions.
Click Save.
Learn about this finding type's supported assets and scan settings.
Legacy authorization enabled
Category name in the API: LEGACY_AUTHORIZATION_ENABLED
Legacy Authorization is enabled on GKE clusters.
In Kubernetes, role-based access control (RBAC) lets you define roles with rules containing a set of permissions, and grant permissions at the cluster and namespace level. This feature provides better security by ensuring that users only have access to specific resources. Consider disabling legacy attribute-based access control (ABAC).
To remediate this finding, complete the following steps:
Go to the Kubernetes clusters page in the Google Cloud console.
Select the cluster listed in the Security Health Analytics finding.
Click Edit.
If the cluster configuration recently changed, the edit button might be disabled. If you aren't able to edit the cluster settings, wait a few minutes and then try again.
On the Legacy Authorization drop-down list, select Disabled.
Click Save.
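The equivalent gcloud command disables legacy ABAC on the cluster; CLUSTER_NAME and ZONE are placeholders:
gcloud container clusters update CLUSTER_NAME \
    --zone=ZONE \
    --no-enable-legacy-authorization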
Learn about this finding type's supported assets and scan settings.
Legacy metadata enabled
Category name in the API: LEGACY_METADATA_ENABLED
Legacy metadata is enabled on GKE clusters.
Compute Engine's instance metadata server exposes legacy /0.1/ and /v1beta1/ endpoints, which do not enforce metadata query headers. Requiring query headers is a feature of the /v1/ APIs that makes it more difficult for a potential attacker to retrieve instance metadata. Unless required, we recommend that you disable these legacy /0.1/ and /v1beta1/ APIs.
For more information, see Disabling and transitioning from legacy metadata APIs.
To remediate this finding, complete the following steps:
You can only disable legacy metadata APIs when creating a new cluster or when adding a new node pool to an existing cluster. To update an existing cluster and disable legacy metadata APIs, see Migrating workloads to different machine types.
Learn about this finding type's supported assets and scan settings.
Legacy network
Category name in the API: LEGACY_NETWORK
A legacy network exists in a project.
Legacy networks are not recommended because many new Google Cloud security features are not supported in legacy networks. Instead, use VPC networks. For more information, see Legacy networks.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, complete the following steps:
Go to the VPC networks page in the Google Cloud console.
To create a new non-legacy network, click Create Network.
Return to the VPC networks page.
In the list of networks, click legacy_network.
In the VPC network details page, click
Delete VPC Network.
Learn about this finding type's supported assets and scan settings.
Load balancer logging disabled
Category name in the API: LOAD_BALANCER_LOGGING_DISABLED
Logging is disabled for the backend service in a load balancer.
Enabling logging for a load balancer allows you to view HTTP(S) network traffic for your web applications. For more information, see Load balancer.
To remediate this finding, complete the following steps:
Go to the Cloud Load Balancing page in the Google Cloud console.
Click the name of your load balancer.
Click Edit.
Click Backend configuration.
In the Backend configuration page, click Edit.
In the Logging section, select Enable logging and choose the best sample rate for your project.
To finish editing the backend service, click Update.
To finish editing the load balancer, click Update.
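Logging can also be enabled on the backend service with gcloud. This sketch assumes a global backend service; use --region instead of --global for a regional one, and adjust the sample rate to control logging cost:
gcloud compute backend-services update BACKEND_SERVICE_NAME \
    --global \
    --enable-logging \
    --logging-sample-rate=1.0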
Learn about this finding type's supported assets and scan settings.
Locked retention policy not set
Category name in the API: LOCKED_RETENTION_POLICY_NOT_SET
A locked retention policy is not set for logs.
A locked retention policy prevents logs from being overwritten and the log bucket from being deleted. For more information, see Bucket Lock.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, complete the following steps:
Go to the Storage Browser page in the Google Cloud console.
Select the bucket listed in the Security Health Analytics finding.
On the Bucket details page, click the Retention tab.
If a retention policy has not been set, click Set Retention Policy.
Enter a retention period.
Click Save. The retention policy is shown in the Retention tab.
Click Lock to ensure the retention period is not shortened or removed.
Learn about this finding type's supported assets and scan settings.
Log not exported
Category name in the API: LOG_NOT_EXPORTED
A resource doesn't have an appropriate log sink configured.
Cloud Logging helps you quickly find the root cause of issues in your system and applications. However, most logs are only retained for 30 days by default. Export copies of all log entries to extend the storage period. For more information, see Overview of log exports.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, complete the following steps:
Go to the Log Router page in the Google Cloud console.
Click Create Sink.
To ensure that all logs are exported, leave the inclusion and exclusion filters empty.
Click Create Sink.
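A sink that exports all logs can also be created with gcloud. This sketch sends logs to a Cloud Storage bucket and omits a filter so that all entries are exported; SINK_NAME and BUCKET_NAME are placeholders:
gcloud logging sinks create SINK_NAME \
    storage.googleapis.com/BUCKET_NAME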
Learn about this finding type's supported assets and scan settings.
Master authorized networks disabled
Category name in the API: MASTER_AUTHORIZED_NETWORKS_DISABLED
Control Plane Authorized Networks is not enabled on GKE clusters.
Control Plane Authorized Networks improves security for your container cluster by blocking specified IP addresses from accessing your cluster's control plane. For more information, see Adding authorized networks for control plane access.
To remediate this finding, complete the following steps:
Go to the Kubernetes clusters page in the Google Cloud console.
Select the cluster listed in the Security Health Analytics finding.
Click Edit.
If the cluster configuration recently changed, the edit button might be disabled. If you aren't able to edit the cluster settings, wait a few minutes and then try again.
On the Control Plane Authorized Networks drop-down list, select Enabled.
Click Add authorized network.
Specify the authorized networks you want to use.
Click Save.
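The same configuration can be applied with gcloud; CLUSTER_NAME and ZONE are placeholders, and 203.0.113.0/24 is an example range that you replace with the networks you want to authorize:
gcloud container clusters update CLUSTER_NAME \
    --zone=ZONE \
    --enable-master-authorized-networks \
    --master-authorized-networks=203.0.113.0/24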
Learn about this finding type's supported assets and scan settings.
MFA not enforced
Category name in the API: MFA_NOT_ENFORCED
This finding isn't available for project-level activations.
Multi-factor authentication, specifically 2-Step Verification (2SV), is disabled for some users in your organization.
Multi-factor authentication is used to protect accounts from unauthorized access and is the most important tool for protecting your organization against compromised login credentials. For more information, see Protect your business with 2-Step Verification.
To remediate this finding, complete the following steps:
Go to the Admin console page in the Google Cloud console.
Enforce 2-Step Verification for all organizational units.
Suppress findings of this type
To suppress findings of this type, define a mute rule that automatically mutes future findings of this type. For more information, see Mute findings in Security Command Center.
Although it is not a recommended way to suppress findings, you can also add dedicated security marks to assets so that Security Health Analytics detectors don't create security findings for those assets.
- To prevent this finding from being activated again, add the security mark allow_mfa_not_enforced with a value of true to the asset.
- To ignore potential violations for specific organizational units, add the excluded_orgunits security mark to the asset with a comma-separated list of organizational unit paths in the value field. For example, excluded_orgunits:/people/vendors/vendorA,/people/contractors/contractorA.
Learn about this finding type's supported assets and scan settings.
Network not monitored
Category name in the API: NETWORK_NOT_MONITORED
Log metrics and alerts aren't configured to monitor VPC network changes.
To detect incorrect or unauthorized changes to your network setup, monitor VPC network changes. For more information, see Overview of logs-based metrics.
Depending on the quantity of information, Cloud Monitoring costs can be significant. To understand your usage of the service and its costs, see Optimize cost: Cloud operations.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, complete the following steps:
Create metric
Go to the Logs-based Metrics page in the Google Cloud console.
Click Create Metric.
Under Metric Type, select Counter.
Under Details:
- Set a Log metric name.
- Add a description.
- Set Units to 1.
Under Filter selection, copy and paste the following text into the Build filter box, replacing existing text, if necessary:
resource.type="gce_network" AND (protoPayload.methodName:"compute.networks.insert" OR protoPayload.methodName:"compute.networks.patch" OR protoPayload.methodName:"compute.networks.delete" OR protoPayload.methodName:"compute.networks.removePeering" OR protoPayload.methodName:"compute.networks.addPeering")
Click Create Metric. You see a confirmation.
Create Alert Policy
- In the Google Cloud console, go to the Log-based Metrics page. If you use the search bar to find this page, then select the result whose subheading is Logging.
- Under the User-defined metrics section, select the metric you created in the previous section.
- Click More, and then click Create alert from metric. The New condition dialog opens with the metric and data transformation options pre-populated.
- Click Next.
- Review the pre-populated settings. You might want to modify the Threshold value.
- Click Condition name and enter a name for the condition.
- Click Next.
- To add notifications to your alerting policy, click Notification channels. In the dialog, select one or more notification channels from the menu, and then click OK. To be notified when incidents are opened and closed, check Notify on incident closure. By default, notifications are sent only when incidents are opened.
- Optional: Update the Incident autoclose duration. This field determines when Monitoring closes incidents in the absence of metric data.
- Optional: Click Documentation, and then add any information that you want included in a notification message.
- Click Alert name and enter a name for the alerting policy.
- Click Create Policy.
Learn about this finding type's supported assets and scan settings.
Network policy disabled
Category name in the API: NETWORK_POLICY_DISABLED
Network policy is disabled on GKE clusters.
By default, pod to pod communication is open. Open communication allows pods to connect directly across nodes, with or without network address translation. A NetworkPolicy resource is like a pod-level firewall that restricts connections between pods, unless the NetworkPolicy resource explicitly allows the connection. Learn how to define a network policy.
To remediate this finding, complete the following steps:
Go to the Kubernetes clusters page in the Google Cloud console.
Click the name of the cluster listed in the Security Health Analytics finding.
Under Networking, in the row for Calico Kubernetes Network policy, click Edit.
If the cluster configuration recently changed, the edit button might be disabled. If you aren't able to edit the cluster settings, wait a few minutes and then try again.
In the dialog, select Enable Calico Kubernetes network policy for control plane and Enable Calico Kubernetes network policy for nodes.
Click Save Changes.
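From the command line, enabling network policy on an existing cluster is a two-step gcloud operation (the add-on first, then the nodes); CLUSTER_NAME and ZONE are placeholders:
gcloud container clusters update CLUSTER_NAME --zone=ZONE \
    --update-addons=NetworkPolicy=ENABLED
gcloud container clusters update CLUSTER_NAME --zone=ZONE \
    --enable-network-policy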
Learn about this finding type's supported assets and scan settings.
Nodepool boot CMEK disabled
Category name in the API: NODEPOOL_BOOT_CMEK_DISABLED
Boot disks in this node pool are not encrypted with customer-managed encryption keys (CMEK). CMEK allows the user to configure the default encryption keys for boot disks in a node pool.
To remediate this finding, complete the following steps:
Go to the Kubernetes clusters page in the Google Cloud console.
In the list of clusters, click the name of the cluster in the finding.
Click the Nodes tab.
For each default-pool node pool, click Delete.
When prompted to confirm, click Delete.
To create new node pools using CMEK, see Using customer-managed encryption keys (CMEK). CMEK incurs additional costs related to Cloud KMS.
Learn about this finding type's supported assets and scan settings.
Nodepool secure boot disabled
Category name in the API: NODEPOOL_SECURE_BOOT_DISABLED
Secure boot is disabled for a GKE cluster.
Enable Secure Boot for Shielded GKE Nodes to verify the digital signatures of node boot components. For more information, see Secure Boot.
To remediate this finding, complete the following steps:
Once a Node pool is provisioned, it can't be updated to enable Secure Boot. You must create a new Node pool with Secure Boot enabled.
Go to the Kubernetes clusters page in the Google Cloud console.
Click on the name of the cluster in the finding.
Click on Add Node Pool.
In the Node pools menu, do the following:
- Click the name of the new Node pool to expand the tab.
- Select Security, and then, under Shielded options, select Enable secure boot.
- Click Create.
- To migrate your workloads from the existing non-conforming node pools to the new node pools, see Migrating workloads to different machine types.
- After your workloads have been moved, delete the original non-conforming node pool.
Learn about this finding type's supported assets and scan settings.
Non org IAM member
Category name in the API: NON_ORG_IAM_MEMBER
A user outside of your organization or project has IAM permissions on a project or organization. Learn more about IAM permissions.
To remediate this finding, complete the following steps:
Go to the IAM page in the Google Cloud console.
Select the checkbox next to users outside your organization or project.
Click Remove.
Learn about this finding type's supported assets and scan settings.
Object versioning disabled
Category name in the API: OBJECT_VERSIONING_DISABLED
Object versioning isn't enabled on a storage bucket where sinks are configured.
To support the retrieval of objects that are deleted or overwritten, Cloud Storage offers the Object Versioning feature. Enable Object Versioning to protect your Cloud Storage data from being overwritten or accidentally deleted. Learn how to Enable Object Versioning.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, use the --versioning flag in a gcloud storage buckets update command with the appropriate value:
gcloud storage buckets update gs://finding.assetDisplayName --versioning
Replace finding.assetDisplayName with the name of the relevant bucket.
Learn about this finding type's supported assets and scan settings.
Open Cassandra port
Category name in the API: OPEN_CASSANDRA_PORT
Firewall rules that allow any IP address to connect to Cassandra ports might expose your Cassandra services to attackers. For more information, see VPC firewall rules overview.
The Cassandra service ports are:
TCP - 7000, 7001, 7199, 8888, 9042, 9160, 61620, 61621
This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
In the list of firewall rules, click the name of the firewall rule in the finding.
Click Edit.
Under Source IP ranges, delete 0.0.0.0/0.
Add specific IP addresses or IP ranges that you want to let connect to the instance.
Add specific protocols and ports you want to open on your instance.
Click Save.
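For this and the other open-port findings that follow, the firewall rule can also be narrowed with gcloud. This is a sketch; FIREWALL_RULE_NAME is a placeholder, 203.0.113.0/24 is an example source range, and the protocol and port should match the service you actually expose (tcp:9042 here corresponds to Cassandra):
gcloud compute firewall-rules update FIREWALL_RULE_NAME \
    --source-ranges=203.0.113.0/24 \
    --allow=tcp:9042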
Learn about this finding type's supported assets and scan settings.
Open ciscosecure websm port
Category name in the API: OPEN_CISCOSECURE_WEBSM_PORT
Firewall rules that allow any IP address to connect to CiscoSecure/WebSM ports might expose your CiscoSecure/WebSM services to attackers. For more information, see VPC firewall rules overview.
The CiscoSecure/WebSM service ports are:
TCP - 9090
This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
In the list of firewall rules, click the name of the firewall rule in the finding.
Click Edit.
Under Source IP ranges, delete 0.0.0.0/0.
Add specific IP addresses or IP ranges that you want to let connect to the instance.
Add specific protocols and ports you want to open on your instance.
Click Save.
Learn about this finding type's supported assets and scan settings.
Open directory services port
Category name in the API: OPEN_DIRECTORY_SERVICES_PORT
Firewall rules that allow any IP address to connect to Directory ports might expose your Directory services to attackers. For more information, see VPC firewall rules overview.
The Directory service ports are:
TCP - 445
UDP - 445
This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
In the list of firewall rules, click the name of the firewall rule in the finding.
Click Edit.
Under Source IP ranges, delete 0.0.0.0/0.
Add specific IP addresses or IP ranges that you want to let connect to the instance.
Add specific protocols and ports you want to open on your instance.
Click Save.
Learn about this finding type's supported assets and scan settings.
Open DNS port
Category name in the API: OPEN_DNS_PORT
Firewall rules that allow any IP address to connect to DNS ports might expose your DNS services to attackers. For more information, see VPC firewall rules overview.
The DNS service ports are:
TCP - 53
UDP - 53
This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
In the list of firewall rules, click the name of the firewall rule in the finding.
Click Edit.
Under Source IP ranges, delete 0.0.0.0/0.
Add specific IP addresses or IP ranges that you want to let connect to the instance.
Add specific protocols and ports you want to open on your instance.
Click Save.
Learn about this finding type's supported assets and scan settings.
Open Elasticsearch port
Category name in the API: OPEN_ELASTICSEARCH_PORT
Firewall rules that allow any IP address to connect to Elasticsearch ports might expose your Elasticsearch services to attackers. For more information, see VPC firewall rules overview.
The Elasticsearch service ports are:
TCP - 9200, 9300
This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
In the list of firewall rules, click the name of the firewall rule in the finding.
Click Edit.
Under Source IP ranges, delete 0.0.0.0/0.
Add specific IP addresses or IP ranges that you want to let connect to the instance.
Add specific protocols and ports you want to open on your instance.
Click Save.
Learn about this finding type's supported assets and scan settings.
Open firewall
Category name in the API: OPEN_FIREWALL
Firewall rules that allow connections from all IP addresses, like 0.0.0.0/0, or from all ports can unnecessarily expose resources to attacks from unintended sources. These rules should be removed or scoped explicitly to the intended source IP ranges or ports. For example, in applications intended to be public, consider restricting allowed ports to those needed for the application, like 80 and 443. If your application needs to allow connections from all IP addresses or ports, consider adding the asset to an allowlist. Learn more about Updating firewall rules.
This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.
To remediate this finding, complete the following steps:
Go to the Firewall rules page in the Google Cloud console.
Click the firewall rule listed in the Security Health Analytics finding, and then click Edit.
Under Source IP ranges, edit the IP values to restrict the range of IPs that is allowed.
Under Protocols and ports, select Specified protocols and ports, select the allowed protocols, and enter ports that are allowed.
Click Save.
Learn about this finding type's supported assets and scan settings.
Open FTP port
Category name in the API: OPEN_FTP_PORT
Firewall rules that allow any IP address to connect to FTP ports might expose your FTP services to attackers. For more information, see VPC firewall rules overview.
The FTP service ports are:
TCP - 21
This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
In the list of firewall rules, click the name of the firewall rule in the finding.
Click Edit.
Under Source IP ranges, delete 0.0.0.0/0.
Add specific IP addresses or IP ranges that you want to let connect to the instance.
Add specific protocols and ports you want to open on your instance.
Click Save.
Learn about this finding type's supported assets and scan settings.
Open group IAM member
Category name in the API: OPEN_GROUP_IAM_MEMBER
One or more principals that have access to an organization, project, or folder are Google Groups accounts that can be joined without approval.
Google Cloud customers can use Google Groups to manage roles and permissions for members in their organizations, or apply access policies to collections of users. Instead of granting roles directly to members, administrators can grant roles and permissions to Google Groups, and then add members to specific groups. Group members inherit all of a group's roles and permissions, which lets members access specific resources and services.
If an open Google Groups account is used as a principal in an IAM binding, anyone can inherit the associated role just by joining the group directly or indirectly (through a subgroup). We recommend revoking the roles of the open groups or restricting access to those groups.
To remediate this finding, perform one of the following procedures.
Remove the group from the IAM policy
Go to the IAM page in the Google Cloud console.
If necessary, select the project, folder, or organization in the finding.
Revoke the role of each open group identified in the finding.
Restrict access to the open groups
- Sign in to Google Groups.
- Update the settings of each open group, and its subgroups, to specify who can join the group and who must approve them.
Learn about this finding type's supported assets and scan settings.
Open HTTP port
Category name in the API: OPEN_HTTP_PORT
Firewall rules that allow any IP address to connect to HTTP ports might expose your HTTP services to attackers. For more information, see VPC firewall rules overview.
The HTTP service ports are:
TCP - 80
This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
In the list of firewall rules, click the name of the firewall rule in the finding.
Click Edit.
Under Source IP ranges, delete 0.0.0.0/0.
Add specific IP addresses or IP ranges that you want to let connect to the instance.
Add specific protocols and ports you want to open on your instance.
Click Save.
Learn about this finding type's supported assets and scan settings.
Open LDAP port
Category name in the API: OPEN_LDAP_PORT
Firewall rules that allow any IP address to connect to LDAP ports might expose your LDAP services to attackers. For more information, see VPC firewall rules overview.
The LDAP service ports are:
TCP - 389, 636
UDP - 389
This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
In the list of firewall rules, click the name of the firewall rule in the finding.
Click Edit.
Under Source IP ranges, delete 0.0.0.0/0.
Add specific IP addresses or IP ranges that you want to let connect to the instance.
Add specific protocols and ports you want to open on your instance.
Click Save.
Learn about this finding type's supported assets and scan settings.
Open Memcached port
Category name in the API: OPEN_MEMCACHED_PORT
Firewall rules that allow any IP address to connect to Memcached ports might expose your Memcached services to attackers. For more information, see VPC firewall rules overview.
The Memcached service ports are:
TCP - 11211, 11214, 11215
UDP - 11211, 11214, 11215
This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
In the list of firewall rules, click the name of the firewall rule in the finding.
Click Edit.
Under Source IP ranges, delete 0.0.0.0/0.
Add specific IP addresses or IP ranges that you want to let connect to the instance.
Add specific protocols and ports you want to open on your instance.
Click Save.
Learn about this finding type's supported assets and scan settings.
Open MongoDB port
Category name in the API: OPEN_MONGODB_PORT
Firewall rules that allow any IP address to connect to MongoDB ports might expose your MongoDB services to attackers. For more information, see VPC firewall rules overview.
The MongoDB service ports are:
TCP - 27017, 27018, 27019
This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
In the list of firewall rules, click the name of the firewall rule in the finding.
Click Edit.
Under Source IP ranges, delete 0.0.0.0/0.
Add specific IP addresses or IP ranges that you want to let connect to the instance.
Add specific protocols and ports you want to open on your instance.
Click Save.
Learn about this finding type's supported assets and scan settings.
Open MySQL port
Category name in the API: OPEN_MYSQL_PORT
Firewall rules that allow any IP address to connect to MySQL ports might expose your MySQL services to attackers. For more information, see VPC firewall rules overview.
The MySQL service ports are:
TCP - 3306
This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
In the list of firewall rules, click the name of the firewall rule in the finding.
Click Edit.
Under Source IP ranges, delete 0.0.0.0/0.
Add specific IP addresses or IP ranges that you want to let connect to the instance.
Add specific protocols and ports you want to open on your instance.
Click Save.
Learn about this finding type's supported assets and scan settings.
Open NetBIOS port
Category name in the API: OPEN_NETBIOS_PORT
Firewall rules that allow any IP address to connect to NetBIOS ports might expose your NetBIOS services to attackers. For more information, see VPC firewall rules overview.
The NetBIOS service ports are:
TCP - 137, 138, 139
UDP - 137, 138, 139
This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
In the list of firewall rules, click the name of the firewall rule in the finding.
Click Edit.
Under Source IP ranges, delete 0.0.0.0/0.
Add specific IP addresses or IP ranges that you want to let connect to the instance.
Add specific protocols and ports you want to open on your instance.
Click Save.
Learn about this finding type's supported assets and scan settings.
Open OracleDB port
Category name in the API: OPEN_ORACLEDB_PORT
Firewall rules that allow any IP address to connect to OracleDB ports might expose your OracleDB services to attackers. For more information, see VPC firewall rules overview.
The OracleDB service ports are:
TCP - 1521, 2483, 2484
UDP - 2483, 2484
This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
In the list of firewall rules, click the name of the firewall rule in the finding.
Click Edit.
Under Source IP ranges, delete 0.0.0.0/0.
Add specific IP addresses or IP ranges that you want to let connect to the instance.
Add specific protocols and ports you want to open on your instance.
Click Save.
Learn about this finding type's supported assets and scan settings.
Open POP3 port
Category name in the API: OPEN_POP3_PORT
Firewall rules that allow any IP address to connect to POP3 ports might expose your POP3 services to attackers. For more information, see VPC firewall rules overview.
The POP3 service ports are:
TCP - 110
This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
In the list of firewall rules, click the name of the firewall rule in the finding.
Click Edit.
Under Source IP ranges, delete 0.0.0.0/0.
Add specific IP addresses or IP ranges that you want to let connect to the instance.
Add specific protocols and ports you want to open on your instance.
Click Save.
Learn about this finding type's supported assets and scan settings.
Open PostgreSQL port
Category name in the API: OPEN_POSTGRESQL_PORT
Firewall rules that allow any IP address to connect to PostgreSQL ports might expose your PostgreSQL services to attackers. For more information, see VPC firewall rules overview.
The PostgreSQL service ports are:
TCP - 5432
UDP - 5432
This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
In the list of firewall rules, click the name of the firewall rule in the finding.
Click Edit.
Under Source IP ranges, delete 0.0.0.0/0.
Add specific IP addresses or IP ranges that you want to let connect to the instance.
Add specific protocols and ports you want to open on your instance.
Click Save.
Learn about this finding type's supported assets and scan settings.
Open RDP port
Category name in the API: OPEN_RDP_PORT
Firewall rules that allow any IP address to connect to RDP ports might expose your RDP services to attackers. For more information, see VPC firewall rules overview.
The RDP service ports are:
TCP - 3389
UDP - 3389
This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
In the list of firewall rules, click the name of the firewall rule in the finding.
Click Edit.
Under Source IP ranges, delete 0.0.0.0/0.
Add specific IP addresses or IP ranges that you want to let connect to the instance.
Add specific protocols and ports you want to open on your instance.
Click Save.
Learn about this finding type's supported assets and scan settings.
Open Redis port
Category name in the API: OPEN_REDIS_PORT
Firewall rules that allow any IP address to connect to Redis ports might expose your Redis services to attackers. For more information, see VPC firewall rules overview.
The Redis service ports are:
TCP - 6379
This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
In the list of firewall rules, click the name of the firewall rule in the finding.
Click Edit.
Under Source IP ranges, delete 0.0.0.0/0.
Add specific IP addresses or IP ranges that you want to let connect to the instance.
Add specific protocols and ports you want to open on your instance.
Click Save.
Learn about this finding type's supported assets and scan settings.
Open SMTP port
Category name in the API: OPEN_SMTP_PORT
Firewall rules that allow any IP address to connect to SMTP ports might expose your SMTP services to attackers. For more information, see VPC firewall rules overview.
The SMTP service ports are:
TCP - 25
This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
In the list of firewall rules, click the name of the firewall rule in the finding.
Click Edit.
Under Source IP ranges, delete 0.0.0.0/0.
Add specific IP addresses or IP ranges that you want to let connect to the instance.
Add specific protocols and ports you want to open on your instance.
Click Save.
Learn about this finding type's supported assets and scan settings.
Open SSH port
Category name in the API: OPEN_SSH_PORT
Firewall rules that allow any IP address to connect to SSH ports might expose your SSH services to attackers. For more information, see VPC firewall rules overview.
The SSH service ports are:
SCTP - 22
TCP - 22
This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
In the list of firewall rules, click the name of the firewall rule in the finding.
Click Edit.
Under Source IP ranges, delete 0.0.0.0/0.
Add specific IP addresses or IP ranges that you want to let connect to the instance.
Add specific protocols and ports you want to open on your instance.
Click Save.
Learn about this finding type's supported assets and scan settings.
Open Telnet port
Category name in the API: OPEN_TELNET_PORT
Firewall rules that allow any IP address to connect to Telnet ports might expose your Telnet services to attackers. For more information, see VPC firewall rules overview.
The Telnet service ports are:
TCP - 23
This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled.
To remediate this finding, complete the following steps:
Go to the Firewall page in the Google Cloud console.
In the list of firewall rules, click the name of the firewall rule in the finding.
Click Edit.
Under Source IP ranges, delete 0.0.0.0/0.
Add specific IP addresses or IP ranges that you want to let connect to the instance.
Add specific protocols and ports you want to open on your instance.
Click Save.
Learn about this finding type's supported assets and scan settings.
Org policy Confidential VM policy
Category name in the API: ORG_POLICY_CONFIDENTIAL_VM_POLICY
A Compute Engine resource is out of compliance with the constraints/compute.restrictNonConfidentialComputing organization policy. For more information about this org policy constraint, see Enforcing organization policy constraints.
Your organization requires this VM to have the Confidential VM service enabled. VMs that don't have this service enabled will not use runtime memory encryption, exposing them to runtime memory attacks.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, complete the following steps:
Go to the VM instances page in the Google Cloud console.
In the list of instances, click the name of the instance in the finding.
If the VM doesn't require the Confidential VM service, move it to a new folder or project.
If the VM requires Confidential VM, click Delete.
To create a new instance with Confidential VM enabled, see Quickstart: Creating a Confidential VM instance.
Learn about this finding type's supported assets and scan settings.
Org policy location restriction
Category name in the API: ORG_POLICY_LOCATION_RESTRICTION
The Organization Policy gcp.resourceLocations constraint lets you restrict the creation of new resources to the Cloud Regions you select. For more information, see Restricting resource locations.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, complete the following steps:
The ORG_POLICY_LOCATION_RESTRICTION detector covers many resource types, and remediation instructions are different for each resource. The general approach to remediate location violations includes the following:
- Copy, move, or back up the out-of-region resource or its data into a resource that is in-region. Read the documentation for individual services to get instructions on moving resources.
- Delete the original out-of-region resource or its data.
This approach is not possible for all resource types. For guidance, consult the customized recommendations that are provided in the finding.
Additional considerations
When remediating this finding, consider the following.
Managed resources
The lifecycles of resources are sometimes managed and controlled by other resources. For example, a managed Compute Engine instance group creates and destroys Compute Engine instances in accordance with the instance group's autoscaling policy. If managed and managing resources are in-scope for location enforcement, both might be flagged as violating the Organization Policy. Remediation of findings for managed resources should be done on the managing resource to ensure operational stability.
Resources in-use
Certain resources are used by other resources. For example, a Compute Engine disk that is attached to a running Compute Engine instance is considered to be in-use by the instance. If the resource in-use violates the location Organization Policy, you need to ensure that the resource is not in-use before addressing the location violation.
Learn about this finding type's supported assets and scan settings.
OS login disabled
Category name in the API: OS_LOGIN_DISABLED
OS Login is disabled on this Compute Engine instance.
OS Login enables centralized SSH key management with IAM, and it disables metadata-based SSH key configuration on all instances in a project. Learn how to set up and configure OS Login.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, complete the following steps:
Go to the Metadata page in the Google Cloud console.
Click Edit, and then click Add item.
Add an item with key enable-oslogin and value TRUE.
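As a command-line alternative, a sketch like the following sets the same project-wide metadata; PROJECT_ID is a placeholder.
# Enable OS Login for all VMs in the project through project-wide metadata.
gcloud compute project-info add-metadata \
    --project=PROJECT_ID \
    --metadata=enable-oslogin=TRUE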
Learn about this finding type's supported assets and scan settings.
Over privileged account
Category name in the API: OVER_PRIVILEGED_ACCOUNT
A GKE node is using the Compute Engine default service account, which has broad access by default and might be over-privileged for running your GKE cluster.
To remediate this finding, complete the following steps:
Follow the instructions to Use least privilege Google service accounts.
Learn about this finding type's supported assets and scan settings.
Over privileged scopes
Category name in the API: OVER_PRIVILEGED_SCOPES
A node service account has broad access scopes.
Access scopes are the legacy method of specifying permissions for your instance. To reduce the possibility of a privilege escalation in an attack, create and use a minimally privileged service account to run your GKE cluster.
To remediate this finding, follow the instructions to Use least privilege Google service accounts.
Learn about this finding type's supported assets and scan settings.
Over privileged service account user
Category name in the API: OVER_PRIVILEGED_SERVICE_ACCOUNT_USER
A user has the iam.serviceAccountUser or iam.serviceAccountTokenCreator role at the project, folder, or organization level, instead of on a specific service account.
Granting those roles to a user for a project, folder, or organization gives the user access to all existing and future service accounts at that scope. This situation can result in unintended escalation of privileges. For more information, see Service account permissions.
To remediate this finding, complete the following steps:
Go to the IAM page in the Google Cloud console.
If necessary, select the project, folder, or organization in the finding.
For each principal assigned roles/iam.serviceAccountUser or roles/iam.serviceAccountTokenCreator, do the following:
- Click Edit.
- In the Edit permissions panel, next to the roles, click Delete.
- Click Save.
Follow this guide to grant individual users permission to impersonate a single service account. You need to follow the guide for each service account you want to allow chosen users to impersonate.
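As an illustration only, the following gcloud sketch removes a broad project-level grant and re-grants the role on a single service account; the project ID, user email, and service account email are placeholders.
# Remove the project-level Service Account User grant (all names are placeholders).
gcloud projects remove-iam-policy-binding example-project \
    --member="user:alice@example.com" \
    --role="roles/iam.serviceAccountUser"
# Grant the role only on the one service account that the user needs to impersonate.
gcloud iam service-accounts add-iam-policy-binding \
    sa-name@example-project.iam.gserviceaccount.com \
    --member="user:alice@example.com" \
    --role="roles/iam.serviceAccountUser"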
Learn about this finding type's supported assets and scan settings.
Owner not monitored
Category name in the API: OWNER_NOT_MONITORED
Log metrics and alerts aren't configured to monitor Project Ownership assignments or changes.
The IAM Owner role has the highest level of privilege on a project. To secure your resources, set up alerts to get notified when new owners are added or removed. For more information, see Overview of logs-based metrics.
Depending on the quantity of information, Cloud Monitoring costs can be significant. To understand your usage of the service and its costs, see Optimize cost: Cloud operations.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, complete the following steps:
Create metric
Go to the Logs-based Metrics page in the Google Cloud console.
Click Create Metric.
Under Metric Type, select Counter.
Under Details:
- Set a Log metric name.
- Add a description.
- Set Units to 1.
Under Filter selection, copy and paste the following text into the Build filter box, replacing existing text, if necessary:
(protoPayload.serviceName="cloudresourcemanager.googleapis.com") AND (ProjectOwnership OR projectOwnerInvitee) OR (protoPayload.serviceData.policyDelta.bindingDeltas.action="REMOVE" AND protoPayload.serviceData.policyDelta.bindingDeltas.role="roles/owner") OR (protoPayload.serviceData.policyDelta.bindingDeltas.action="ADD" AND protoPayload.serviceData.policyDelta.bindingDeltas.role="roles/owner")
Click Create Metric. You see a confirmation.
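If you script this setup, a gcloud sketch like the following creates the same counter metric; the metric name is a placeholder and the filter is the one shown above.
# Create a counter metric that tracks changes to project owner role bindings (metric name is a placeholder).
gcloud logging metrics create project-ownership-changes \
    --description="Counts changes to project owner role bindings" \
    --log-filter='(protoPayload.serviceName="cloudresourcemanager.googleapis.com") AND (ProjectOwnership OR projectOwnerInvitee) OR (protoPayload.serviceData.policyDelta.bindingDeltas.action="REMOVE" AND protoPayload.serviceData.policyDelta.bindingDeltas.role="roles/owner") OR (protoPayload.serviceData.policyDelta.bindingDeltas.action="ADD" AND protoPayload.serviceData.policyDelta.bindingDeltas.role="roles/owner")'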
Create Alert Policy
- In the Google Cloud console, go to the Log-based Metrics page. If you use the search bar to find this page, then select the result whose subheading is Logging.
- Under the User-defined metrics section, select the metric you created in the previous section.
- Click More, and then click Create alert from metric. The New condition dialog opens with the metric and data transformation options pre-populated.
- Click Next.
- Review the pre-populated settings. You might want to modify the Threshold value.
- Click Condition name and enter a name for the condition.
- Click Next.
- To add notifications to your alerting policy, click Notification channels. In the dialog, select one or more notification channels from the menu, and then click OK.
- To be notified when incidents are opened and closed, check Notify on incident closure. By default, notifications are sent only when incidents are opened.
- Optional: Update the Incident autoclose duration. This field determines when Monitoring closes incidents in the absence of metric data.
- Optional: Click Documentation, and then add any information that you want included in a notification message.
- Click Alert name and enter a name for the alerting policy.
- Click Create Policy.
Learn about this finding type's supported assets and scan settings.
Pod security policy disabled
Category name in the API: POD_SECURITY_POLICY_DISABLED
The PodSecurityPolicy is disabled on a GKE cluster.
A PodSecurityPolicy is an admission controller resource that validates requests to create and update pods on a cluster. Clusters won't accept pods that don't meet the conditions defined in the PodSecurityPolicy.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, define and authorize PodSecurityPolicies, and enable the PodSecurityPolicy controller. For instructions, see Using PodSecurityPolicies.
Learn about this finding type's supported assets and scan settings.
Primitive roles used
Category name in the API: PRIMITIVE_ROLES_USED
A user has one of the following IAM basic roles: roles/owner, roles/editor, or roles/viewer. These roles are too permissive and shouldn't be used; where they are needed, assign them per project only.
For more information, see Understanding roles.
To remediate this finding, complete the following steps:
Go to the IAM policy page in the Google Cloud console.
For each user assigned a primitive role, consider using more granular roles instead.
Learn about this finding type's supported assets and scan settings.
Private cluster disabled
Category name in the API: PRIVATE_CLUSTER_DISABLED
A GKE cluster has the private cluster feature disabled.
Private clusters allow nodes to only have private IP addresses. This feature limits outbound internet access for nodes. If a cluster node doesn't have a public IP address, it isn't discoverable or exposed to the public internet. You can still route traffic to a node by using an internal load balancer. For more information, see Private clusters.
You can't make an existing cluster private. To remediate this finding, create a new private cluster:
Go to the Kubernetes clusters page in the Google Cloud console.
Click Create Cluster.
In the navigation menu, under Cluster, select Networking.
Select the radio button for Private cluster.
Under Advanced networking options, select the checkbox for Enable VPC-native traffic routing (uses alias IP).
Click Create.
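A gcloud sketch for the same outcome might look like the following; the cluster name, zone, and control-plane CIDR are placeholders to adapt to your environment.
# Create a private, VPC-native cluster (name, zone, and ranges are placeholders).
gcloud container clusters create example-private-cluster \
    --zone=us-central1-a \
    --enable-private-nodes \
    --enable-ip-alias \
    --master-ipv4-cidr=172.16.0.0/28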
Learn about this finding type's supported assets and scan settings.
Private Google access disabled
Category name in the API: PRIVATE_GOOGLE_ACCESS_DISABLED
There are private subnets without access to Google public APIs.
Private Google Access enables VM instances with only internal (private) IP addresses to reach the public IP addresses of Google APIs and services.
For more information, see Configuring Private Google Access.
To remediate this finding, complete the following steps:
Go to the VPC networks page in the Google Cloud console.
In the list of networks, click the name of the desired network.
On the VPC network details page, click the Subnets tab.
In the list of subnets, click the name of the subnet associated with the Kubernetes cluster in the finding.
On the Subnet details page, click Edit.
Under Private Google Access, select On.
Click Save.
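The following gcloud sketch enables the same setting on one subnet; the subnet name and region are placeholders.
# Turn on Private Google Access for the subnet used by the affected instances.
gcloud compute networks subnets update example-subnet \
    --region=us-central1 \
    --enable-private-ip-google-access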
To remove public (external) IPs from VM instances whose only external traffic is to Google APIs, see Unassigning a static external IP address.
Learn about this finding type's supported assets and scan settings.
Public bucket ACL
Category name in the API: PUBLIC_BUCKET_ACL
A bucket is public and anyone on the internet can access it.
For more information, see Overview of access control.
To remediate this finding, complete the following steps:
Go to the Storage Browser page in the Google Cloud console.
Select the bucket listed in the Security Health Analytics finding.
On the Bucket details page, click the Permissions tab.
Next to View by, click Roles.
In the Filter box, search for allUsers and allAuthenticatedUsers.
Click Delete to remove all IAM permissions granted to allUsers and allAuthenticatedUsers.
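If you manage bucket IAM from the command line, a sketch like this removes one public binding; the bucket name and role are placeholders for the binding shown in the finding, and you repeat the command for each public role and for allAuthenticatedUsers.
# Remove a public grant from the bucket (bucket name and role are placeholders).
gcloud storage buckets remove-iam-policy-binding gs://example-bucket \
    --member="allUsers" \
    --role="roles/storage.objectViewer"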
Learn about this finding type's supported assets and scan settings.
Public Compute image
Category name in the API: PUBLIC_COMPUTE_IMAGE
A Compute Engine image is public and anyone on the internet can access it. allUsers represents anyone on the internet and allAuthenticatedUsers represents anyone who is authenticated with a Google account; neither is constrained to users within your organization.
Compute Engine images might contain sensitive information like encryption keys or licensed software. Such sensitive information should not be publicly accessible. If you intended to make this Compute Engine image public, ensure that it does not contain any sensitive information.
For more information, see Access control overview.
To remediate this finding, complete the following steps:
Go to the Compute Engine images page in the Google Cloud console.
Select the box next to the public-image image, and then click Show Info Panel.
In the Filter box, search principals for allUsers and allAuthenticatedUsers.
Expand the role for which you want to remove users.
Click Delete to remove a user from that role.
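A command-line sketch for the same cleanup follows; the image name and role are placeholders, and you repeat the command for allAuthenticatedUsers if that principal is also granted access.
# Remove the public grant from the image (image name and role are placeholders).
gcloud compute images remove-iam-policy-binding example-image \
    --member="allUsers" \
    --role="roles/compute.imageUser"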
Learn about this finding type's supported assets and scan settings.
Public dataset
Category name in the API: PUBLIC_DATASET
A BigQuery dataset is public and accessible to anyone on the internet. The IAM principal allUsers represents anyone on the internet and allAuthenticatedUsers represents anyone who is logged into a Google service; neither is constrained to users within your organization.
For more information, see Controlling access to datasets.
To remediate this finding, complete the following steps:
Go to the BigQuery Explorer page in the Google Cloud console.
In the list of datasets, click on the name of the dataset that is identified in the finding. The Dataset info panel opens.
Near the top of the Dataset info panel, click SHARING.
In the drop-down menu, click on Permissions.
In the Dataset Permissions panel, enter allUsers and allAuthenticatedUsers, and remove access for these principals.
Learn about this finding type's supported assets and scan settings.
Public IP address
Category name in the API: PUBLIC_IP_ADDRESS
A Compute Engine instance has a public IP address.
To reduce your organization's attack surface, avoid assigning public IP addresses to your VMs. Stopped instances might still be flagged with a Public IP finding, for example, if the network interfaces are configured to assign an ephemeral public IP on start. Ensure that the network configurations for stopped instances do not include external access.
For more information, see Securely connecting to VM instances.
To remediate this finding, complete the following steps:
Go to the VM instances page in the Google Cloud console.
In the list of instances, check the box next to the name of the instance in the finding.
Click Edit.
For each interface under Network interfaces, click Edit and set External IP to None.
Click Done, and then click Save.
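From the command line, removing the external IP usually means deleting the interface's access config, roughly as follows; the instance name, zone, and access-config name are placeholders for your own values.
# Delete the access config that assigns the external IP (names are placeholders).
gcloud compute instances delete-access-config example-instance \
    --zone=us-central1-a \
    --access-config-name="External NAT"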
Learn about this finding type's supported assets and scan settings.
Public log bucket
Category name in the API: PUBLIC_LOG_BUCKET
This finding isn't available for project-level activations.
A storage bucket is public and used as a log sink, meaning that anyone on the internet can access logs stored in this bucket. allUsers represents anyone on the internet and allAuthenticatedUsers represents anyone who is logged into a Google service; neither is constrained to users within your organization.
For more information, see Overview of access control.
To remediate this finding, complete the following steps:
Go to the Cloud Storage browser page in the Google Cloud console.
In the list of buckets, click the name of the bucket indicated in the finding.
Click the Permissions tab.
Remove allUsers and allAuthenticatedUsers from the list of principals.
Learn about this finding type's supported assets and scan settings.
Public SQL instance
Category name in the API: PUBLIC_SQL_INSTANCE
Your SQL instance has 0.0.0.0/0 as an allowed network. This means that any IPv4 client can pass the network firewall and make login attempts to your instance, including clients you might not have intended to allow. Clients still need valid credentials to successfully log in to your instance.
For more information, see Authorizing with authorized networks.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
In the navigation panel, click Connections.
Under Authorized networks, delete 0.0.0.0/0 and add specific IP addresses or IP ranges that you want to let connect to your instance.
Click Done, and then click Save.
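One possible command-line equivalent replaces the open range with an explicit allowlist; the instance name and IP range are placeholders.
# The --authorized-networks flag replaces the existing allowlist, so include every range you still need.
gcloud sql instances patch example-instance \
    --authorized-networks=203.0.113.0/24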
Learn about this finding type's supported assets and scan settings.
Pubsub CMEK disabled
Category name in the API: PUBSUB_CMEK_DISABLED
A Pub/Sub topic is not encrypted with customer-managed encryption keys (CMEK).
With CMEK, keys that you create and manage in Cloud KMS wrap the keys that Google uses to encrypt your data, giving you more control over access to your data.
To remediate this finding, delete the existing topic and create a new one:
Go to Pub/Sub's Topics page in the Google Cloud console.
If necessary, select the project containing the Pub/Sub topic.
Select the checkbox next to the topic listed in the finding, and then click Delete.
To create a new Pub/Sub topic with CMEK enabled, see Using customer-managed encryption keys. CMEK incurs additional costs related to Cloud KMS.
Publish findings or other data to the CMEK-enabled Pub/Sub topic.
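A gcloud sketch for creating the replacement topic might look like the following; the topic name, project, and Cloud KMS key path are placeholders.
# Create a topic encrypted with a customer-managed key (all names are placeholders).
gcloud pubsub topics create example-topic \
    --topic-encryption-key=projects/example-project/locations/us-central1/keyRings/example-ring/cryptoKeys/example-key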
Learn about this finding type's supported assets and scan settings.
Route not monitored
Category name in the API: ROUTE_NOT_MONITORED
Log metrics and alerts aren't configured to monitor VPC network route changes.
Google Cloud routes are destinations and hops that define the path that network traffic takes from a VM instance to a destination IP. By monitoring changes to route tables, you can help ensure that all VPC traffic flows through an expected path.
For more information, see Overview of logs-based metrics.
Depending on the quantity of information, Cloud Monitoring costs can be significant. To understand your usage of the service and its costs, see Optimize cost: Cloud operations.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, complete the following steps:
Create metric
Go to the Logs-based Metrics page in the Google Cloud console.
Click Create Metric.
Under Metric Type, select Counter.
Under Details:
- Set a Log metric name.
- Add a description.
- Set Units to 1.
Under Filter selection, copy and paste the following text into the Build filter box, replacing existing text, if necessary:
resource.type="gce_route" AND (protoPayload.methodName:"compute.routes.delete" OR protoPayload.methodName:"compute.routes.insert")
Click Create Metric. You see a confirmation.
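If you script this setup, a gcloud sketch like the following creates the same counter metric; the metric name is a placeholder and the filter is the one shown above.
# Create a counter metric that tracks VPC route inserts and deletes (metric name is a placeholder).
gcloud logging metrics create vpc-route-changes \
    --description="Counts VPC network route changes" \
    --log-filter='resource.type="gce_route" AND (protoPayload.methodName:"compute.routes.delete" OR protoPayload.methodName:"compute.routes.insert")'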
Create Alert Policy
- In the Google Cloud console, go to the Log-based Metrics page. If you use the search bar to find this page, then select the result whose subheading is Logging.
- Under the User-defined metrics section, select the metric you created in the previous section.
- Click More, and then click Create alert from metric. The New condition dialog opens with the metric and data transformation options pre-populated.
- Click Next.
- Review the pre-populated settings. You might want to modify the Threshold value.
- Click Condition name and enter a name for the condition.
- Click Next.
- To add notifications to your alerting policy, click Notification channels. In the dialog, select one or more notification channels from the menu, and then click OK.
- To be notified when incidents are opened and closed, check Notify on incident closure. By default, notifications are sent only when incidents are opened.
- Optional: Update the Incident autoclose duration. This field determines when Monitoring closes incidents in the absence of metric data.
- Optional: Click Documentation, and then add any information that you want included in a notification message.
- Click Alert name and enter a name for the alerting policy.
- Click Create Policy.
Learn about this finding type's supported assets and scan settings.
Redis role used on org
Category name in the API: REDIS_ROLE_USED_ON_ORG
This finding isn't available for project-level activations.
A Redis IAM role is assigned at the organization or folder level.
The following Redis IAM roles should be assigned per project only, not at the organization or folder level:
roles/redis.admin
roles/redis.viewer
roles/redis.editor
For more information, see Access control and permissions.
To remediate this finding, complete the following steps:
Go to the IAM policy page in the Google Cloud console.
Remove the Redis IAM roles indicated in the finding and add them on the individual projects instead.
Learn about this finding type's supported assets and scan settings.
Release channel disabled
Category name in the API: RELEASE_CHANNEL_DISABLED
A GKE cluster is not subscribed to a release channel.
Subscribe to a release channel to automate version upgrades for the GKE cluster. A release channel also reduces version management complexity by letting you choose the balance of feature availability and stability that you require. For more information, see Release channels.
To remediate this finding, complete the following steps:
Go to the Kubernetes clusters page in the Google Cloud console.
In the Cluster basics section, click Upgrade cluster master version in the Release channel row.
If the cluster configuration recently changed, the edit button might be disabled. If you aren't able to edit the cluster settings, wait a few minutes and then try again.
In the dialog, select Release channel, and then choose the release channel you want to subscribe to.
If the control plane version of your cluster is not upgradeable to a release channel, that channel might be disabled as an option.
Click Save Changes.
Learn about this finding type's supported assets and scan settings.
RSASHA1 for signing
Category name in the API: RSASHA1_FOR_SIGNING
RSASHA1 is used for key signing in Cloud DNS zones. The algorithm used for key signing should not be weak.
To remediate this finding, replace the algorithm with a recommended one by following the Using advanced signing options guide.
Learn about this finding type's supported assets and scan settings.
Service account key not rotated
Category name in the API: SERVICE_ACCOUNT_KEY_NOT_ROTATED
This finding isn't available for project-level activations.
A user-managed service account key hasn't been rotated for more than 90 days.
In general, user-managed service account keys should be rotated at least every 90 days, to ensure that data cannot be accessed with an old key that might have been lost, compromised, or stolen. For more information, see Rotate service account keys to reduce security risk caused by leaked keys.
If you generated the public/private key pair yourself, stored the private key in a hardware security module (HSM), and uploaded the public key to Google, then you might not need to rotate the key every 90 days. Instead, you can rotate the key if you believe that it might have been compromised.
To remediate this finding, complete the following steps:
Go to the Service Accounts page in the Google Cloud console.
If necessary, select the project indicated in the finding.
In the list of service accounts, find the service account listed in the finding, and then click Delete. Before proceeding, consider the impact deleting a service account could have on your production resources.
Create a new service account key to replace the old one. For more information, see Creating service account keys.
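If you rotate keys from the command line, a sketch like the following lists, creates, and then deletes keys; the service account email and key ID are placeholders, and you should switch your workloads to the new key before deleting the old one.
# List existing keys to find the ID of the old key (email is a placeholder).
gcloud iam service-accounts keys list \
    --iam-account=sa-name@example-project.iam.gserviceaccount.com
# Create a replacement key and save it locally.
gcloud iam service-accounts keys create new-key.json \
    --iam-account=sa-name@example-project.iam.gserviceaccount.com
# After your workloads use the new key, delete the old key by its ID (placeholder).
gcloud iam service-accounts keys delete OLD_KEY_ID \
    --iam-account=sa-name@example-project.iam.gserviceaccount.com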
Learn about this finding type's supported assets and scan settings.
Service account role separation
Category name in the API: SERVICE_ACCOUNT_ROLE_SEPARATION
This finding isn't available for project-level activations.
One or more principals in your organization have multiple service account permissions assigned. No account should simultaneously have Service Account Admin along with other service account permissions. To learn about service accounts and the roles available to them, see Service accounts.
To remediate this finding, complete the following steps:
Go to the IAM page in the Google Cloud console.
For each principal listed in the finding, do the following:
- Check whether the role was inherited from a folder or organization resource by looking at the Inheritance column. If the column contains a link to a parent resource, click on the link to go to the parent resource's IAM page.
- Click Edit next to a principal.
- To remove permissions, click Delete next to Service Account Admin. If you want to remove all service account permissions, click Delete next to all other permissions.
Click Save.
Learn about this finding type's supported assets and scan settings.
Shielded VM disabled
Category name in the API: SHIELDED_VM_DISABLED
Shielded VM is disabled on this Compute Engine instance.
Shielded VMs are virtual machines (VMs) on Google Cloud that are hardened by a set of security controls that help defend against rootkits and bootkits. Shielded VMs help ensure that the boot loader and firmware are signed and verified. Learn more about Shielded VM.
To remediate this finding, complete the following steps:
Go to the VM instances page in the Google Cloud console.
Select the instance related to the Security Health Analytics finding.
On the Instance details page that loads, click Stop.
After the instance stops, click Edit.
In the Shielded VM section, toggle Turn on vTPM and Turn on Integrity Monitoring to enable Shielded VM.
Optionally, if you do not use any custom or unsigned drivers, then also enable Secure Boot.
Click Save. The new configuration appears on the Instance details page.
Click Start to start the instance.
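The same change can be scripted roughly as follows; the instance name and zone are placeholders.
# Stop the instance, enable Shielded VM options, then start it again (names are placeholders).
gcloud compute instances stop example-instance --zone=us-central1-a
gcloud compute instances update example-instance \
    --zone=us-central1-a \
    --shielded-vtpm \
    --shielded-integrity-monitoring
gcloud compute instances start example-instance --zone=us-central1-a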
Learn about this finding type's supported assets and scan settings.
SQL CMEK disabled
Category name in the API: SQL_CMEK_DISABLED
A SQL database instance is not using customer-managed encryption keys (CMEK).
With CMEK, keys that you create and manage in Cloud KMS wrap the keys that Google uses to encrypt your data, giving you more control over access to your data. For more information, see CMEK overviews for your product: Cloud SQL for MySQL, Cloud SQL for PostgreSQL, or Cloud SQL for SQL Server. CMEK incurs additional costs related to Cloud KMS.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Delete.
To create a new instance with CMEK enabled, follow the instructions to configure CMEK for your product: Cloud SQL for MySQL, Cloud SQL for PostgreSQL, or Cloud SQL for SQL Server.
Learn about this finding type's supported assets and scan settings.
SQL contained database authentication
Category name in the API: SQL_CONTAINED_DATABASE_AUTHENTICATION
A Cloud SQL for SQL Server database instance does not have the contained database authentication database flag set to Off.
The contained database authentication flag controls whether you can create or attach contained databases to the Database Engine. A contained database includes all database settings and metadata required to define the database and has no configuration dependencies on the instance of the Database Engine where the database is installed.
Enabling this flag is not recommended because of the following:
- Users can connect to the database without authentication at the Database Engine level.
- Isolating the database from the Database Engine makes it possible to move the database to another instance of SQL Server.
Contained databases face unique threats that should be understood and mitigated by SQL Server Database Engine administrators. Most threats result from the USER WITH PASSWORD authentication process, which moves the authentication boundary from the Database Engine level to the database level.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
Under the Database flags section, set the contained database authentication database flag with the value Off.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL cross DB ownership chaining
Category name in the API: SQL_CROSS_DB_OWNERSHIP_CHAINING
A Cloud SQL for SQL Server database instance does not have the cross db ownership chaining database flag set to Off.
The cross db ownership chaining flag lets you control cross-database ownership chaining at the database level or allow cross-database ownership chaining for all database statements.
Enabling this flag is not recommended unless all databases hosted by the SQL Server instance participate in cross-database ownership chaining and you are aware of the security implications of this setting.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
Under the Database flags section, set the cross db ownership chaining database flag with the value Off.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL external scripts enabled
Category name in the API: SQL_EXTERNAL_SCRIPTS_ENABLED
A Cloud SQL for SQL Server database instance does not have the external scripts enabled database flag set to Off.
When activated, this setting enables the execution of scripts with certain remote language extensions. Since this feature can adversely affect the security of the system, we recommend disabling it.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
In the Database flags section, set the external scripts enabled database flag with the value Off.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL instance not monitored
Category name in the API: SQL_INSTANCE_NOT_MONITORED
This finding isn't available for project-level activations.
Log metrics and alerts aren't configured to monitor Cloud SQL instance configuration changes.
Misconfiguration of SQL instance options can cause security risks. Disabling auto backup and high availability options could impact business continuity and not restricting authorized networks could increase exposure to untrusted networks. Monitoring changes to SQL instance configuration helps you reduce the time it takes to detect and correct misconfigurations.
For more information, see Overview of logs-based metrics.
Depending on the quantity of information, Cloud Monitoring costs can be significant. To understand your usage of the service and its costs, see Optimize cost: Cloud operations.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, complete the following steps:
Create metric
Go to the Logs-based Metrics page in the Google Cloud console.
Click Create Metric.
Under Metric Type, select Counter.
Under Details:
- Set a Log metric name.
- Add a description.
- Set Units to 1.
Under Filter selection, copy and paste the following text into the Build filter box, replacing existing text, if necessary:
protoPayload.methodName="cloudsql.instances.update" OR protoPayload.methodName="cloudsql.instances.create" OR protoPayload.methodName="cloudsql.instances.delete"
Click Create Metric. You see a confirmation.
Create Alert Policy
- In the Google Cloud console, go to the Log-based Metrics page. If you use the search bar to find this page, then select the result whose subheading is Logging.
- Under the User-defined metrics section, select the metric you created in the previous section.
- Click More, and then click Create alert from metric. The New condition dialog opens with the metric and data transformation options pre-populated.
- Click Next.
- Review the pre-populated settings. You might want to modify the Threshold value.
- Click Condition name and enter a name for the condition.
- Click Next.
- To add notifications to your alerting policy, click Notification channels. In the dialog, select one or more notification channels from the menu, and then click OK.
- To be notified when incidents are opened and closed, check Notify on incident closure. By default, notifications are sent only when incidents are opened.
- Optional: Update the Incident autoclose duration. This field determines when Monitoring closes incidents in the absence of metric data.
- Optional: Click Documentation, and then add any information that you want included in a notification message.
- Click Alert name and enter a name for the alerting policy.
- Click Create Policy.
Learn about this finding type's supported assets and scan settings.
SQL local infile
Category name in the API: SQL_LOCAL_INFILE
A Cloud SQL for MySQL database instance does not have the local_infile database flag set to Off. Due to security issues associated with the local_infile flag, it should be disabled. For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
Under the Database flags section, set the local_infile database flag with the value Off.
Click Save. The new configuration appears on the Instance overview page.
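A command-line sketch for the same change follows; the instance name is a placeholder, and note that --database-flags replaces the full set of flags on the instance, so include any other flags you already rely on.
# Set local_infile to off (this flag list overwrites all existing database flags).
gcloud sql instances patch example-instance \
    --database-flags=local_infile=off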
Learn about this finding type's supported assets and scan settings.
SQL log checkpoints disabled
Category name in the API: SQL_LOG_CHECKPOINTS_DISABLED
A Cloud SQL for PostgreSQL database instance does not have the log_checkpoints database flag set to On.
Enabling log_checkpoints causes checkpoints and restart points to be logged in the server log. Some statistics are included in the log messages, including the number of buffers written and the time spent writing them.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
Under the Database flags section, set the log_checkpoints database flag with the value On.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL log connections disabled
Category name in the API: SQL_LOG_CONNECTIONS_DISABLED
A Cloud SQL for PostgreSQL database instance does not have the log_connections database flag set to On.
Enabling the log_connections setting causes attempted connections to the server to be logged, along with successful completion of client authentication. The logs can be useful in troubleshooting issues and confirming unusual connection attempts to the server.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
Under the Database flags section, set the log_connections database flag with the value On.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL log disconnections disabled
Category name in the API: SQL_LOG_DISCONNECTIONS_DISABLED
A Cloud SQL for PostgreSQL database instance does not have the log_disconnections database flag set to On.
Enabling the log_disconnections setting creates log entries at the end of each session. The logs are useful in troubleshooting issues and confirming unusual activity across a time period. For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
Under the Database flags section, set the log_disconnections database flag with the value On.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL log duration disabled
Category name in the API: SQL_LOG_DURATION_DISABLED
A Cloud SQL for PostgreSQL database instance does not have the log_duration database flag set to On.
When log_duration is enabled, this setting causes the execution time and duration of each completed statement to be logged. Monitoring the amount of time it takes to execute queries can be crucial in identifying slow queries and troubleshooting database issues.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
Under the Database flags section, set the log_duration database flag to On.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL log error verbosity
Category name in the API: SQL_LOG_ERROR_VERBOSITY
A Cloud SQL for PostgreSQL database instance does not have the log_error_verbosity database flag set to default or verbose.
The log_error_verbosity flag controls the amount of detail in messages logged. The greater the verbosity, the more details are recorded in messages. We recommend setting this flag to default or verbose.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
In the Database flags section, set the log_error_verbosity database flag to default or verbose.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL log lock waits disabled
Category name in the API: SQL_LOG_LOCK_WAITS_DISABLED
A Cloud SQL for PostgreSQL database instance does not have the log_lock_waits database flag set to On.
Enabling the log_lock_waits setting creates log entries when session waits take longer than the deadlock_timeout time to acquire a lock. The logs are useful in determining whether lock waits are causing poor performance.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
Under the Database flags section, set the log_lock_waits database flag with the value On.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL log min duration statement enabled
Category name in the API: SQL_LOG_MIN_DURATION_STATEMENT_ENABLED
A Cloud SQL for PostgreSQL database instance does not have the log_min_duration_statement database flag set to -1.
The log_min_duration_statement flag causes SQL statements that run longer than a specified time to be logged. Consider disabling this setting because SQL statements might contain sensitive information that should not be logged. For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
Under the Database flags section, set the log_min_duration_statement database flag with the value -1.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL log min error statement
Category name in the API: SQL_LOG_MIN_ERROR_STATEMENT
A Cloud SQL for PostgreSQL database instance does not have the log_min_error_statement database flag set appropriately.
The log_min_error_statement flag controls whether SQL statements that cause error conditions are recorded in server logs. SQL statements of the specified severity or higher are logged with messages for the error statements. The greater the severity, the fewer messages are recorded.
If log_min_error_statement is not set to the correct value, messages might not be classified as error messages. A severity set too low might increase the number of messages and make it difficult to find actual errors. A severity set too high might cause error messages for actual errors to not be logged.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
Under the Database flags section, set the log_min_error_statement database flag with one of the following recommended values, according to your organization's logging policy.
debug5
debug4
debug3
debug2
debug1
info
notice
warning
error
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL log min error statement severity
Category name in the API: SQL_LOG_MIN_ERROR_STATEMENT_SEVERITY
A Cloud SQL for PostgreSQL database instance does not have the log_min_error_statement database flag set appropriately.
The log_min_error_statement flag controls whether SQL statements that cause error conditions are recorded in server logs. SQL statements of the specified severity or stricter are logged with messages for the error statements. The stricter the severity, the fewer messages are recorded.
If log_min_error_statement is not set to the correct value, messages might not be classified as error messages. A severity set too low would increase the number of messages and make it difficult to find actual errors. A severity level that is too high (too strict) might cause error messages for actual errors to not be logged.
We recommend setting this flag to error or stricter.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
In the Database flags section, set the log_min_error_statement database flag with one of the following recommended values, according to your organization's logging policy.
error
log
fatal
panic
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL log min messages
Category name in the API: SQL_LOG_MIN_MESSAGES
A Cloud SQL for PostgreSQL database instance does not have the log_min_messages database flag set to at minimum warning.
The log_min_messages flag controls which message levels are recorded in server logs. The higher the severity, the fewer messages are recorded. Setting the threshold too low can result in increased log storage size and length, making it difficult to find actual errors.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
Under the Database flags section, set the log_min_messages database flag with one of the following recommended values, according to your organization's logging policy.
debug5
debug4
debug3
debug2
debug1
info
notice
warning
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL log executor stats enabled
Category name in the API: SQL_LOG_EXECUTOR_STATS_ENABLED
A Cloud SQL for PostgreSQL database instance does not have the log_executor_stats database flag set to Off.
When the log_executor_stats flag is activated, executor performance statistics are included in the PostgreSQL logs for each query. This setting can be useful for troubleshooting, but it can significantly increase the number of logs and performance overhead.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
Under the Database flags section, set the log_executor_stats database flag to Off.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL log hostname enabled
Category name in the API: SQL_LOG_HOSTNAME_ENABLED
A Cloud SQL for PostgreSQL database instance does not have the log_hostname database flag set to Off.
When the log_hostname flag is activated, the hostname of the connecting host is logged. By default, connection log messages only show the IP address. This setting can be useful for troubleshooting. However, it can incur overhead on server performance, because for each statement logged, DNS resolution is required to convert an IP address to a hostname.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
Under the Database flags section, set the log_hostname database flag to Off.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL log parser stats enabled
Category name in the API: SQL_LOG_PARSER_STATS_ENABLED
A Cloud SQL for PostgreSQL database instance does not have the log_parser_stats database flag set to Off.
When the log_parser_stats flag is activated, parser performance statistics are included in the PostgreSQL logs for each query. This can be useful for troubleshooting, but it can significantly increase the number of logs and performance overhead.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
Under the Database flags section, set the log_parser_stats database flag to Off.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL log planner stats enabled
Category name in the API: SQL_LOG_PLANNER_STATS_ENABLED
A Cloud SQL for PostgreSQL database instance does not have the log_planner_stats database flag set to Off.
When the log_planner_stats flag is activated, a crude profiling method for logging PostgreSQL planner performance statistics is used. This can be useful for troubleshooting, but it can significantly increase the number of logs and performance overhead.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
Under the Database flags section, set the log_planner_stats database flag to Off.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL log statement
Category name in the API: SQL_LOG_STATEMENT
A Cloud SQL for PostgreSQL database instance does not have the log_statement database flag set to ddl.
The value of this flag controls which SQL statements are logged. Logging helps troubleshoot operational problems and permits forensic analysis. If this flag isn't set to the correct value, relevant information might be skipped or might be hidden in too many messages. A value of ddl (all data definition statements) is recommended unless otherwise directed by your organization's logging policy.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
Under the Database flags section, set the log_statement database flag to ddl.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL log statement stats enabled
Category name in the API: SQL_LOG_STATEMENT_STATS_ENABLED
A Cloud SQL for PostgreSQL database instance does not have the log_statement_stats database flag set to Off.
When the log_statement_stats flag is activated, end-to-end performance statistics are included in the PostgreSQL logs for each query. This setting can be useful for troubleshooting, but it can significantly increase the number of logs and performance overhead.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
Under the Database flags section, set the log_statement_stats database flag to Off.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL log temp files
Category name in the API: SQL_LOG_TEMP_FILES
A Cloud SQL for PostgreSQL database instance does not have the log_temp_files database flag set to 0.
Temporary files can be created for sorts, hashes, and temporary query results. Setting the log_temp_files flag to 0 causes all temporary files information to be logged. Logging all temporary files is useful for identifying potential performance issues. For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
Under the Database flags section, set the log_temp_files database flag with the value 0.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL no root password
Category name in the API: SQL_NO_ROOT_PASSWORD
A MySQL database instance does not have a password set for the root account. You should add a password to the MySQL database instance. For more information, see MySQL users.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
On the Instance details page that loads, select the Users tab.
Next to the root user, click More, and then select Change Password.
Enter a new, strong password, and then click OK.
Learn about this finding type's supported assets and scan settings.
SQL public IP
Category name in the API: SQL_PUBLIC_IP
A Cloud SQL database has a public IP address.
To reduce your organization's attack surface, Cloud SQL databases should not have public IP addresses. Private IP addresses provide improved network security and lower latency for your application.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
On the left-side menu, click on Connections.
Click on the Networking tab and clear the Public IP check box.
If the instance is not already configured to use a private IP, see Configuring private IP for an existing instance.
Click Save.
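A possible command-line equivalent is shown below; the instance name is a placeholder, and private IP connectivity must already be configured before you remove the public address.
# Remove the public IP address from the instance (instance name is a placeholder).
gcloud sql instances patch example-instance \
    --no-assign-ip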
Learn about this finding type's supported assets and scan settings.
SQL remote access enabled
Category name in the API: SQL_REMOTE_ACCESS_ENABLED
A Cloud SQL for SQL Server database instance doesn't have the remote access database flag set to Off.
When activated, this setting grants permission to run local stored procedures from remote servers or remote stored procedures from the local server. This functionality can be abused to launch a Denial-of-Service (DoS) attack on remote servers by offloading query processing to a target. To prevent abuse, we recommend disabling this setting.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
In the Flags section, set remote access to Off.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL skip show database disabled
Category name in the API: SQL_SKIP_SHOW_DATABASE_DISABLED
A Cloud SQL for MySQL database instance does not have the skip_show_database database flag set to On.
When activated, this flag prevents users from using the SHOW DATABASES statement if they don't have the SHOW DATABASES privilege. With this setting, users without explicit permission aren't able to see databases that belong to other users. We recommend enabling this flag.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
In the Flags section, set skip_show_database to On.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL trace flag 3625
Category name in the API: SQL_TRACE_FLAG_3625
A Cloud SQL for SQL Server database instance doesn't have the 3625 (trace flag) database flag set to On.
This flag limits the amount of information returned to users who are not members of the sysadmin fixed server role, by masking the parameters of some error messages using asterisks (******). To help prevent the disclosure of sensitive information, we recommend enabling this flag.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
In the Database flags section, set 3625 to On.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL user connections configured
Category name in the API: SQL_USER_CONNECTIONS_CONFIGURED
A Cloud SQL for SQL Server database instance has the user connections database flag configured.
The user connections option specifies the maximum number of simultaneous user connections that are allowed on an instance of SQL Server. Because it's a dynamic (self-configuring) option, SQL Server adjusts the maximum number of user connections automatically as needed, up to the maximum value allowable. The default value is 0, which means that up to 32,767 user connections are allowed. For this reason, we don't recommend configuring the user connections database flag.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
In the Database flags section, next to user connections, click Delete.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL user options configured
Category name in the API: SQL_USER_OPTIONS_CONFIGURED
A Cloud SQL for SQL Server database instance has the user options database flag configured.
This setting overrides global default values of the SET options for all users. Since users and applications might assume the default database SET options are in use, setting the user options might cause unexpected results. For this reason, we don't recommend configuring the user options database flag.
For more information, see Configuring database flags.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
Click Edit.
In the Database flags section, next to user options, click Delete.
Click Save. The new configuration appears on the Instance overview page.
Learn about this finding type's supported assets and scan settings.
SQL weak root password
Category name in the API: SQL_WEAK_ROOT_PASSWORD
A MySQL database instance has a weak password set for the root account. You should set a strong password for the instance. For more information, see MySQL users.
To remediate this finding, complete the following steps:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
On the Instance details page that loads, select the Users tab.
Next to the root user, click More, and then select Change Password.
Enter a new, strong password, and then click OK.
Learn about this finding type's supported assets and scan settings.
SSL not enforced
Category name in the API: SSL_NOT_ENFORCED
A Cloud SQL database instance doesn't require all incoming connections to use SSL.
To avoid leaking sensitive data in transit through unencrypted communications, all incoming connections to your SQL database instance should use SSL. Learn more about Configuring SSL/TLS.
To remediate this finding, allow only SSL connections for your SQL instances:
Go to the Cloud SQL Instances page in the Google Cloud console.
Select the instance listed in the Security Health Analytics finding.
On the Connections tab, click either Allow only SSL connections or Require trusted client certificates. For more information, see Enforce SSL/TLS encryption.
If you chose Require trusted client certificates, create a new client certificate. For more information, see Create a new client certificate.
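If you prefer the command line, SSL can be enforced with a single gcloud command. This is a minimal sketch that assumes an instance named INSTANCE_NAME; newer gcloud releases also expose an --ssl-mode flag with more granular options:
gcloud sql instances patch INSTANCE_NAME --require-ssl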
Learn about this finding type's supported assets and scan settings.
Too many KMS users
Category name in the API: TOO_MANY_KMS_USERS
Limit the number of principal users that can use cryptographic keys to three. The following predefined roles grant permissions to encrypt, decrypt, or sign data using cryptographic keys:
roles/owner
roles/cloudkms.cryptoKeyEncrypterDecrypter
roles/cloudkms.cryptoKeyEncrypter
roles/cloudkms.cryptoKeyDecrypter
roles/cloudkms.signer
roles/cloudkms.signerVerifier
For more information, see Permissions and roles.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To remediate this finding, complete the following steps:
Go to the Cloud KMS keys page in the Google Cloud console.
Click the name of the key ring indicated in the finding.
Click the name of the key indicated in the finding.
Select the box next to the primary version, and then click Show Info Panel.
Reduce the number of principals having permissions to encrypt, decrypt, or sign data to three or fewer. To revoke permissions, click Delete next to each principal.
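Permissions can also be revoked with the gcloud CLI. A minimal sketch, assuming a key named KEY_NAME in key ring KEY_RING and location LOCATION, and a principal such as user:alice@example.com holding the Cloud KMS CryptoKey Encrypter/Decrypter role:
gcloud kms keys remove-iam-policy-binding KEY_NAME --keyring=KEY_RING --location=LOCATION --member=user:alice@example.com --role=roles/cloudkms.cryptoKeyEncrypterDecrypter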
Learn about this finding type's supported assets and scan settings.
Unconfined AppArmor profile
Category name in the API: GKE_APP_ARMOR
A container can be explicitly configured to be unconfined by AppArmor. This configuration means that no AppArmor profile is applied to the container, so the container is not constrained by a profile. Disabling this preventive security control increases the risk of container escape.
To remediate this finding, apply the following steps to your affected workloads:
- Open the manifest for each affected workload.
- Set the following restricted fields to one of the allowed values.
Restricted fields
metadata.annotations["container.apparmor.security.beta.kubernetes.io/*"]
Allowed values
- false
User managed service account key
Category name in the API: USER_MANAGED_SERVICE_ACCOUNT_KEY
A user manages a service account key. Service account keys are a security risk if not managed correctly. You should choose a more secure alternative to service account keys whenever possible. If you must authenticate with a service account key, you are responsible for the security of the private key and for other operations described by Best practices for managing service account keys. If you are prevented from creating a service account key, service account key creation might be disabled for your organization. For more information, see Managing secure-by-default organization resources.
To remediate this finding, complete the following steps:
Go to the Service Accounts page in the Google Cloud console.
If necessary, select the project indicated in the finding.
Delete user-managed service account keys indicated in the finding, if they are not used by any application.
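You can also delete a key with the gcloud CLI. A minimal sketch, assuming the key ID KEY_ID and the service account email SA_EMAIL reported in the finding:
gcloud iam service-accounts keys delete KEY_ID --iam-account=SA_EMAIL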
Learn about this finding type's supported assets and scan settings.
Weak SSL policy
Category name in the API: WEAK_SSL_POLICY
A Compute Engine instance has a weak SSL policy or uses the Google Cloud default SSL policy with TLS version less than 1.2.
HTTPS and SSL Proxy load balancers use SSL policies to determine the protocol and cipher suites used in the TLS connections established between users and the internet. These connections encrypt sensitive data to prevent malicious eavesdroppers from accessing it. A weak SSL policy permits clients using outdated versions of TLS to connect with a less secure cipher suite or protocol. For a list of recommended and outdated cipher suites, visit the iana.org TLS parameters page.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
The remediation steps for this finding differ depending on whether this finding was triggered by the use of a default Google Cloud SSL policy or an SSL policy that allows a weak cipher suite or a minimum TLS version less than 1.2. Follow the procedure below that corresponds to the trigger of the finding.
Default Google Cloud SSL policy remediation
Go to the Target proxies page in the Google Cloud console.
Find the target proxy indicated in the finding and note forwarding rules in the In use by column.
To create a new SSL policy, see Using SSL policies. The policy should have a Minimum TLS version of 1.2 and a Modern or Restricted Profile.
To use a Custom profile, ensure the following cipher suites are disabled:
TLS_RSA_WITH_AES_128_GCM_SHA256
TLS_RSA_WITH_AES_256_GCM_SHA384
TLS_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_AES_256_CBC_SHA
TLS_RSA_WITH_3DES_EDE_CBC_SHA
Apply the SSL policy to each forwarding rule you previously noted.
Weak cipher suite or down-level TLS version allowed remediation
In the Google Cloud console, go to the SSL policies page .
Find the load balancer indicated in the In use by column.
Click the name of the policy.
Click Edit.
Change Minimum TLS version to TLS 1.2 and Profile to Modern or Restricted.
To use a Custom profile, ensure the following cipher suites are disabled:
TLS_RSA_WITH_AES_128_GCM_SHA256
TLS_RSA_WITH_AES_256_GCM_SHA384
TLS_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_AES_256_CBC_SHA
TLS_RSA_WITH_3DES_EDE_CBC_SHA
Click Save.
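The same change can be made with the gcloud CLI. A minimal sketch, assuming a new policy named ssl-policy-min-tls-1-2 and a target HTTPS proxy named TARGET_PROXY_NAME; attach the policy to each affected proxy you noted earlier:
gcloud compute ssl-policies create ssl-policy-min-tls-1-2 --profile=MODERN --min-tls-version=1.2
gcloud compute target-https-proxies update TARGET_PROXY_NAME --ssl-policy=ssl-policy-min-tls-1-2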
Learn about this finding type's supported assets and scan settings.
Web UI enabled
Category name in the API: WEB_UI_ENABLED
The GKE web UI (dashboard) is enabled.
A highly privileged Kubernetes service account backs the Kubernetes web interface. If compromised, the service account can be abused. If you are already using the Google Cloud console, the Kubernetes web interface extends your attack surface unnecessarily. Learn about Disabling the Kubernetes web interface.
To remediate this finding, disable the Kubernetes web interface:
Go to the Kubernetes clusters page in the Google Cloud console.
Click the name of the cluster listed in the Security Health Analytics finding.
Click Edit.
If the cluster configuration recently changed, the edit button might be disabled. If you aren't able to edit the cluster settings, wait a few minutes and then try again.
Click Add-ons. The section expands to display available add-ons.
On the Kubernetes dashboard drop-down list, select Disabled.
Click Save.
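The dashboard add-on can also be disabled with the gcloud CLI. A minimal sketch, assuming a cluster named CLUSTER_NAME in zone ZONE:
gcloud container clusters update CLUSTER_NAME --zone=ZONE --update-addons=KubernetesDashboard=DISABLED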
Learn about this finding type's supported assets and scan settings.
Workload Identity disabled
Category name in the API: WORKLOAD_IDENTITY_DISABLED
Workload Identity is disabled on a GKE cluster.
Workload Identity is the recommended way to access Google Cloud services from within GKE because it offers improved security properties and manageability. Enabling it protects some potentially sensitive system metadata from user workloads running on your cluster. Learn about Metadata concealment.
To remediate this finding, follow the guide to Enable Workload Identity on a cluster.
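If you use the gcloud CLI, Workload Identity can be enabled on an existing cluster as sketched below, assuming a cluster named CLUSTER_NAME in zone ZONE and project PROJECT_ID; existing node pools must also be updated (or recreated) to use the GKE metadata server:
gcloud container clusters update CLUSTER_NAME --zone=ZONE --workload-pool=PROJECT_ID.svc.id.goog
gcloud container node-pools update NODE_POOL_NAME --cluster=CLUSTER_NAME --zone=ZONE --workload-metadata=GKE_METADATA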
Learn about this finding type's supported assets and scan settings.
Remediate AWS misconfigurations
AWS Cloud Shell Full Access Restricted
Category name in the API: ACCESS_AWSCLOUDSHELLFULLACCESS_RESTRICTED
AWS CloudShell is a convenient way of running CLI commands against AWS services. The managed IAM policy AWSCloudShellFullAccess provides full access to CloudShell, including file upload and download between a user's local system and the CloudShell environment. Within the CloudShell environment, a user has sudo permissions and can access the internet, so it is feasible to install file transfer software and move data from CloudShell to external internet servers.
Recommendation: Ensure access to AWSCloudShellFullAccess is restricted
To remediate this finding, complete the following steps:
AWS console
- Open the IAM console at https://console.aws.amazon.com/iam/
- In the left pane, select Policies
- Search for and select AWSCloudShellFullAccess
- On the Entities attached tab, for each item, check the box and select Detach
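If you prefer the AWS CLI, the following sketch lists the entities attached to the policy and detaches it from a user; detach-group-policy and detach-role-policy work the same way for groups and roles:
aws iam list-entities-for-policy --policy-arn arn:aws:iam::aws:policy/AWSCloudShellFullAccess
aws iam detach-user-policy --user-name <user_name> --policy-arn arn:aws:iam::aws:policy/AWSCloudShellFullAccess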
Learn about this finding type's supported assets and scan settings.
Access Keys Rotated Every 90 Days or Less
Category name in the API: ACCESS_KEYS_ROTATED_90_DAYS_LESS
Access keys consist of an access key ID and secret access key, which are used to sign programmatic requests that you make to AWS. AWS users need their own access keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI), Tools for Windows PowerShell, the AWS SDKs, or direct HTTP calls using the APIs for individual AWS services. It is recommended that all access keys be regularly rotated.
Recommendation: Ensure access keys are rotated every 90 days or less
To remediate this finding, complete the following steps:
AWS console
- Go to Management Console (https://console.aws.amazon.com/iam)
- Click on Users
- Click on Security Credentials
- As an Administrator: click on Make Inactive for keys that have not been rotated in 90 days
- As an IAM User: click on Make Inactive or Delete for keys which have not been rotated or used in 90 days
- Click on Create Access Key
- Update programmatic calls with the new access key credentials
AWS CLI
- While the first access key is still active, create a second access key, which is active by default. Run the following command:
aws iam create-access-key
At this point, the user has two active access keys.
- Update all applications and tools to use the new access key.
- Determine whether the first access key is still in use by using this command:
aws iam get-access-key-last-used
- One approach is to wait several days and then check the old access key for any use before proceeding.
Even if Step 3 indicates no use of the old key, it is recommended that you do not immediately delete the first access key. Instead, change the state of the first access key to Inactive using this command:
aws iam update-access-key
- Use only the new access key to confirm that your applications are working. Any applications and tools that still use the original access key will stop working at this point because they no longer have access to AWS resources. If you find such an application or tool, you can switch its state back to Active to reenable the first access key. Then return to Step 2 and update this application to use the new key.
- After you wait some period of time to ensure that all applications and tools have been updated, you can delete the first access key with this command:
aws iam delete-access-key
Learn about this finding type's supported assets and scan settings.
All Expired Ssl Tls Certificates Stored Aws Iam Removed
Category name in the API: ALL_EXPIRED_SSL_TLS_CERTIFICATES_STORED_AWS_IAM_REMOVED
To enable HTTPS connections to your website or application in AWS, you need an SSL/TLS server certificate. You can use ACM or IAM to store and deploy server certificates.
Use IAM as a certificate manager only when you must support HTTPS connections in a region that is not supported by ACM. IAM securely encrypts your private keys and stores the encrypted version in IAM SSL certificate storage. IAM supports deploying server certificates in all regions, but you must obtain your certificate from an external provider for use with AWS. You cannot upload an ACM certificate to IAM. Additionally, you cannot manage your certificates from the IAM Console.
AWS console
Removing expired certificates via AWS Management Console is not currently supported. To delete SSL/TLS certificates stored in IAM via the AWS API use the Command Line Interface (CLI).
AWS CLI
To delete an expired certificate, run the following command, replacing CERTIFICATE_NAME with the name of the certificate to delete:
aws iam delete-server-certificate --server-certificate-name <CERTIFICATE_NAME>
When the preceding command is successful, it does not return any output.
Learn about this finding type's supported assets and scan settings.
Autoscaling Group Elb Healthcheck Required
Category name in the API: AUTOSCALING_GROUP_ELB_HEALTHCHECK_REQUIRED
This checks whether your Auto Scaling groups that are associated with a load balancer are using Elastic Load Balancing health checks.
This ensures that the group can determine an instance's health based on additional tests provided by the load balancer. Using Elastic Load Balancing health checks can help support the availability of applications that use EC2 Auto Scaling groups.
Recommendation: Checks that all Auto Scaling groups associated with a load balancer use health checks
To remediate this finding, complete the following steps:
AWS console
To enable Elastic Load Balancing health checks
- Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
- In the navigation pane, under Auto Scaling, choose Auto Scaling Groups.
- Select the check box for your group.
- Choose Edit.
- Under Health checks, for Health check type, choose ELB.
- For Health check grace period, enter 300.
- At the bottom of the page, choose Update.
For more information on using a load balancer with an Auto Scaling group, see the AWS Auto Scaling User Guide.
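The same change can be made with the AWS CLI. A minimal sketch, assuming an Auto Scaling group named <asg_name>:
aws autoscaling update-auto-scaling-group --auto-scaling-group-name <asg_name> --health-check-type ELB --health-check-grace-period 300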
Learn about this finding type's supported assets and scan settings.
Auto Minor Version Upgrade Feature Enabled Rds Instances
Category name in the API: AUTO_MINOR_VERSION_UPGRADE_FEATURE_ENABLED_RDS_INSTANCES
Ensure that RDS database instances have the Auto Minor Version Upgrade flag enabled so that they automatically receive minor engine upgrades during the specified maintenance window. This way, RDS instances can get new features, bug fixes, and security patches for their database engines.
Recommendation: Ensure Auto Minor Version Upgrade feature is Enabled for RDS Instances
To remediate this finding, complete the following steps:
AWS console
- Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/.
- In the left navigation panel, click on Databases.
- Select the RDS instance that you want to update.
- Click on the Modify button at the top right side.
- On the Modify DB Instance: <instance identifier> page, in the Maintenance section, select Auto minor version upgrade and click on the Yes radio button.
- At the bottom of the page, click on Continue. Check Apply Immediately to apply the changes immediately, or select Apply during the next scheduled maintenance window to avoid any downtime.
- Review the changes and click on Modify DB Instance. The instance status should change from available to modifying and back to available. Once the feature is enabled, the Auto Minor Version Upgrade status should change to Yes.
AWS CLI
- Run the describe-db-instances command to list all RDS database instance names available in the selected AWS region:
aws rds describe-db-instances --region <regionName> --query 'DBInstances[*].DBInstanceIdentifier'
- The command output should return each database instance identifier.
- Run the modify-db-instance command to modify the selected RDS instance configuration. This command applies the changes immediately; remove --apply-immediately to apply the changes during the next scheduled maintenance window and avoid any downtime:
aws rds modify-db-instance --region <regionName> --db-instance-identifier <dbInstanceIdentifier> --auto-minor-version-upgrade --apply-immediately
- The command output should reveal the new configuration metadata for the RDS instance. Check the AutoMinorVersionUpgrade parameter value.
- Run the describe-db-instances command to check whether the Auto Minor Version Upgrade feature has been successfully enabled:
aws rds describe-db-instances --region <regionName> --db-instance-identifier <dbInstanceIdentifier> --query 'DBInstances[*].AutoMinorVersionUpgrade'
- If the command output returns the feature status set to true, the feature is enabled and minor engine upgrades will be applied to the selected RDS instance.
Learn about this finding type's supported assets and scan settings.
Aws Config Enabled All Regions
Category name in the API: AWS_CONFIG_ENABLED_ALL_REGIONS
AWS Config is a web service that performs configuration management of supported AWS resources within your account and delivers log files to you. The recorded information includes the configuration item (AWS resource), relationships between configuration items (AWS resources), any configuration changes between resources. It is recommended AWS Config be enabled in all regions.
Recommendation: Ensure AWS Config is enabled in all regions
To remediate this finding, complete the following steps:
AWS console
- Select the region you want to focus on in the top right of the console
- Click Services
- Click Config
- If a Config recorder is enabled in this region, you should navigate to the Settings page from the navigation menu on the left hand side. If a Config recorder is not yet enabled in this region then you should select "Get Started".
- Select "Record all resources supported in this region"
- Choose to include global resources (IAM resources)
- Specify an S3 bucket in the same account or in another managed AWS account
- Create an SNS Topic from the same AWS account or another managed AWS account
AWS CLI
- Ensure there is an appropriate S3 bucket, SNS topic, and IAM role per the AWS Config Service prerequisites.
- Run this command to create a new configuration recorder:
aws configservice put-configuration-recorder --configuration-recorder name=default,roleARN=arn:aws:iam::012345678912:role/myConfigRole --recording-group allSupported=true,includeGlobalResourceTypes=true
- Create a delivery channel configuration file locally which specifies the channel attributes, populated from the prerequisites set up previously:
{
"name": "default",
"s3BucketName": "my-config-bucket",
"snsTopicARN": "arn:aws:sns:us-east-1:012345678912:my-config-notice",
"configSnapshotDeliveryProperties": {
"deliveryFrequency": "Twelve_Hours"
}
}
- Run this command to create a new delivery channel, referencing the json configuration file made in the previous step:
aws configservice put-delivery-channel --delivery-channel file://deliveryChannel.json
- Start the configuration recorder by running the following command:
aws configservice start-configuration-recorder --configuration-recorder-name default
Learn about this finding type's supported assets and scan settings.
Aws Security Hub Enabled
Category name in the API: AWS_SECURITY_HUB_ENABLED
Security Hub collects security data from across AWS accounts, services, and supported third-party partner products and helps you analyze your security trends and identify the highest priority security issues. When you enable Security Hub, it begins to consume, aggregate, organize, and prioritize findings from AWS services that you have enabled, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie. You can also enable integrations with AWS partner security products.
Recommendation: Ensure AWS Security Hub is enabled
To remediate this finding, complete the following steps:
AWS console
- Use the credentials of the IAM identity to sign in to the Security Hub console.
- When you open the Security Hub console for the first time, choose Enable AWS Security Hub.
- On the welcome page, Security standards list the security standards that Security Hub supports.
- Choose Enable Security Hub.
AWS CLI
- Run the enable-security-hub command. To enable the default standards, include --enable-default-standards.
aws securityhub enable-security-hub --enable-default-standards
- To enable Security Hub without the default standards, include --no-enable-default-standards.
aws securityhub enable-security-hub --no-enable-default-standards
Learn about this finding type's supported assets and scan settings.
Cloudtrail Logs Encrypted Rest Using Kms Cmks
Category name in the API: CLOUDTRAIL_LOGS_ENCRYPTED_REST_USING_KMS_CMKS
AWS CloudTrail is a web service that records AWS API calls for an account and makes those logs available to users and resources in accordance with IAM policies. AWS Key Management Service (KMS) is a managed service that helps create and control the encryption keys used to encrypt account data, and uses Hardware Security Modules (HSMs) to protect the security of encryption keys. CloudTrail logs can be configured to leverage server side encryption (SSE) and KMS customer created master keys (CMK) to further protect CloudTrail logs. It is recommended that CloudTrail be configured to use SSE-KMS.
Recommendation: Ensure CloudTrail logs are encrypted at rest using KMS CMKs
To remediate this finding, complete the following steps:
AWS console
- Sign in to the AWS Management Console and open the CloudTrail console at https://console.aws.amazon.com/cloudtrail
- In the left navigation pane, choose Trails.
- Click on a Trail.
- Under the S3 section, click on the edit button (pencil icon).
- Click Advanced.
- Select an existing CMK from the KMS key Id drop-down menu.
- Note: Ensure the CMK is located in the same region as the S3 bucket.
- Note: You will need to apply a KMS Key policy on the selected CMK in order for CloudTrail as a service to encrypt and decrypt log files using the CMK provided. Steps are provided here for editing the selected CMK Key policy.
- Click Save.
- You will see a notification message stating that you need to have decrypt permissions on the specified KMS key to decrypt log files.
- Click Yes.
AWS CLI
aws cloudtrail update-trail --name <trail_name> --kms-id <cloudtrail_kms_key>
aws kms put-key-policy --key-id <cloudtrail_kms_key> --policy <cloudtrail_kms_key_policy>
Learn about this finding type's supported assets and scan settings.
Cloudtrail Log File Validation Enabled
Category name in the API: CLOUDTRAIL_LOG_FILE_VALIDATION_ENABLED
CloudTrail log file validation creates a digitally signed digest file containing a hash of each log that CloudTrail writes to S3. These digest files can be used to determine whether a log file was changed, deleted, or unchanged after CloudTrail delivered the log. It is recommended that file validation be enabled on all CloudTrails.
Recommendation: Ensure CloudTrail log file validation is enabled
To remediate this finding, complete the following steps:
AWS console
- Sign in to the AWS Management Console and open the CloudTrail console at https://console.aws.amazon.com/cloudtrail
- Click on Trails in the left navigation pane.
- Click on the target trail.
- Within the General details section, click Edit.
- Under the Advanced settings section, check the enable box under Log file validation.
- Click Save changes.
AWS CLI
aws cloudtrail update-trail --name <trail_name> --enable-log-file-validation
Note that periodic validation of logs using these digests can be performed by running the following command:
aws cloudtrail validate-logs --trail-arn <trail_arn> --start-time <start_time> --end-time <end_time>
Learn about this finding type's supported assets and scan settings.
Cloudtrail Trails Integrated Cloudwatch Logs
Category name in the API: CLOUDTRAIL_TRAILS_INTEGRATED_CLOUDWATCH_LOGS
AWS CloudTrail is a web service that records AWS API calls made in a given AWS account. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. CloudTrail uses Amazon S3 for log file storage and delivery, so log files are stored durably. In addition to capturing CloudTrail logs within a specified S3 bucket for long term analysis, real time analysis can be performed by configuring CloudTrail to send logs to CloudWatch Logs. For a trail that is enabled in all regions in an account, CloudTrail sends log files from all those regions to a CloudWatch Logs log group. It is recommended that CloudTrail logs be sent to CloudWatch Logs.
Note: The intent of this recommendation is to ensure AWS account activity is being captured, monitored, and appropriately alarmed on. CloudWatch Logs is a native way to accomplish this using AWS services but does not preclude the use of an alternate solution.
Recommendation: Ensure CloudTrail trails are integrated with CloudWatch Logs
To remediate this finding, complete the following steps:
AWS console
- Login to the CloudTrail console at https://console.aws.amazon.com/cloudtrail/
- Select the Trail that needs to be updated.
- Scroll down to CloudWatch Logs.
- Click Edit.
- Under CloudWatch Logs, check the Enabled box.
- Under Log Group, pick a new log group or select an existing one.
- Edit the Log group name to match the CloudTrail, or pick the existing CloudWatch group.
- Under IAM Role, pick a new role or select an existing one.
- Edit the Role name to match the CloudTrail, or pick the existing IAM role.
- Click Save changes.
AWS CLI
aws cloudtrail update-trail --name <trail_name> --cloudwatch-logs-log-group-arn <cloudtrail_log_group_arn> --cloudwatch-logs-role-arn <cloudtrail_cloudwatchLogs_role_arn>
Learn about this finding type's supported assets and scan settings.
Cloudwatch Alarm Action Check
Category name in the API: CLOUDWATCH_ALARM_ACTION_CHECK
This checks whether Amazon Cloudwatch has actions defined when an alarm transitions between the states 'OK', 'ALARM' and 'INSUFFICIENT_DATA'.
Configuring actions for the ALARM state in Amazon CloudWatch alarms is very important to trigger an immediate response when monitored metrics breach thresholds.
It ensures quick problem resolution, reduces downtime and enables automated remediation, maintaining system health and preventing outages.
The check passes if the following conditions are met:
- Alarms have at least one action.
- Alarms have at least one action when the alarm transitions to the 'INSUFFICIENT_DATA' state from any other state.
- (Optional) Alarms have at least one action when the alarm transitions to an 'OK' state from any other state.
AWS console
To configure ALARM actions for your Amazon CloudWatch alarms, do the following.
- Open the Amazon CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
- In the navigation pane, under 'Alarms', select 'All alarms'.
- Choose the Amazon CloudWatch alarm you want to modify, choose 'Actions' and select 'Edit'.
- From the left choose the 'Step 2 - optional Configure actions'
- For the 'Alarm state trigger' select the 'In alarm' option to setup an ALARM-based action.
- To send a notification to a newly created SNS topic, select 'Create new topic'.
- In the 'Create new topic...' box specify a unique SNS topic name.
- In the 'Email endpoints that will receive the notification…' box specify one or more email addresses.
- Then select 'Create Topic' to create the required Amazon SNS Topic.
- At the bottom right select 'Next', 'Next' and choose 'Update alarm' to apply the changes.
- Open your email client and in the mail from AWS Notifications, click on the link to confirm your subscription to the SNS topic in question.
- Repeat steps 4 - 11 and during step 5, choosing the 'OK' and 'Insufficient data' for the 'Alarm state trigger' to setup actions for those two states.
- Repeat the process for all other CloudWatch alarms within the same AWS region.
- Repeat the process for all other CloudWatch alarms in all other AWS regions.
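Alarm actions can also be configured with the AWS CLI. The sketch below assumes an existing SNS topic ARN and a CPU-utilization alarm; because put-metric-alarm replaces the alarm's entire definition, re-specify the alarm's existing metric settings when you run it:
aws cloudwatch put-metric-alarm --alarm-name <alarm_name> --metric-name CPUUtilization --namespace AWS/EC2 --statistic Average --period 300 --evaluation-periods 1 --threshold 80 --comparison-operator GreaterThanThreshold --alarm-actions <sns_topic_arn> --ok-actions <sns_topic_arn> --insufficient-data-actions <sns_topic_arn>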
Learn about this finding type's supported assets and scan settings.
Cloudwatch Log Group Encrypted
Category name in the API: CLOUDWATCH_LOG_GROUP_ENCRYPTED
This check ensures CloudWatch logs are configured with KMS.
Log group data is always encrypted in CloudWatch Logs. By default, CloudWatch Logs uses server-side encryption for the log data at rest. As an alternative, you can use AWS Key Management Service for this encryption. If you do, the encryption is done using an AWS KMS key. Encryption using AWS KMS is enabled at the log group level, by associating a KMS key with a log group, either when you create the log group or after it exists.
Recommendation: Checks that all log groups in Amazon CloudWatch Logs are encrypted with KMS
For more information see Encrypt log data in CloudWatch Logs using AWS Key Management Service in the Amazon CloudWatch user guide.
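As a sketch, an existing log group can be associated with a customer managed KMS key using the AWS CLI; the key policy must first grant the CloudWatch Logs service principal permission to use the key:
aws logs associate-kms-key --log-group-name <log_group_name> --kms-key-id <kms_key_arn>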
Learn about this finding type's supported assets and scan settings.
CloudTrail CloudWatch Logs Enabled
Category name in the API: CLOUD_TRAIL_CLOUD_WATCH_LOGS_ENABLED
This control checks whether CloudTrail trails are configured to send logs to CloudWatch Logs. The control fails if the CloudWatchLogsLogGroupArn property of the trail is empty.
CloudTrail records AWS API calls that are made in a given account. The recorded information includes the following:
- The identity of the API caller
- The time of the API call
- The source IP address of the API caller
- The request parameters
- The response elements returned by the AWS service
CloudTrail uses Amazon S3 for log file storage and delivery. You can capture CloudTrail logs in a specified S3 bucket for long-term analysis. To perform real-time analysis, you can configure CloudTrail to send logs to CloudWatch Logs.
For a trail that is enabled in all Regions in an account, CloudTrail sends log files from all of those Regions to a CloudWatch Logs log group.
Security Hub recommends that you send CloudTrail logs to CloudWatch Logs. Note that this recommendation is intended to ensure that account activity is captured, monitored, and appropriately alarmed on. You can use CloudWatch Logs to set this up with your AWS services. This recommendation does not preclude the use of a different solution.
Sending CloudTrail logs to CloudWatch Logs facilitates real-time and historic activity logging based on user, API, resource, and IP address. You can use this approach to establish alarms and notifications for anomalous or sensitivity account activity.
Recommendation: Checks that all CloudTrail trails are configured to send logs to AWS CloudWatch
To integrate CloudTrail with CloudWatch Logs, see Sending events to CloudWatch Logs in the AWS CloudTrail User Guide.
Learn about this finding type's supported assets and scan settings.
No AWS Credentials in CodeBuild Project Environment Variables
Category name in the API: CODEBUILD_PROJECT_ENVVAR_AWSCRED_CHECK
This checks whether the project contains the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
Authentication credentials AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY should never be stored in clear text, as this could lead to unintended data exposure and unauthorized access.
To remove environment variables from a CodeBuild project, see Change a build project's settings in AWS CodeBuild in the AWS CodeBuild User Guide. Ensure nothing is selected for Environment variables.
You can store environment variables with sensitive values in the AWS Systems Manager Parameter Store or AWS Secrets Manager and then retrieve them from your build spec. For instructions, see the box labeled Important in the Environment section in the AWS CodeBuild User Guide.
Learn about this finding type's supported assets and scan settings.
Codebuild Project Source Repo Url Check
Category name in the API: CODEBUILD_PROJECT_SOURCE_REPO_URL_CHECK
This checks whether an AWS CodeBuild project Bitbucket source repository URL contains personal access tokens or a user name and password. The control fails if the Bitbucket source repository URL contains personal access tokens or a user name and password.
Sign-in credentials shouldn't be stored or transmitted in clear text or appear in the source repository URL. Instead of personal access tokens or sign-in credentials, you should access your source provider in CodeBuild, and change your source repository URL to contain only the path to the Bitbucket repository location. Using personal access tokens or sign-in credentials could result in unintended data exposure or unauthorized access.
Recommendation: Checks that all projects using GitHub or Bitbucket as the source use OAuth
You can update your CodeBuild project to use OAuth.
To remove basic authentication / (GitHub) Personal Access Token from CodeBuild project source
- Open the CodeBuild console at https://console.aws.amazon.com/codebuild/.
- Choose the build project that contains personal access tokens or a user name and password.
- From Edit, choose Source.
- Choose Disconnect from GitHub / Bitbucket.
- Choose Connect using OAuth, then choose Connect to GitHub / Bitbucket.
- When prompted, choose authorize as appropriate.
- Reconfigure your repository URL and additional configuration settings, as needed.
- Choose Update source.
For more information, refer to CodeBuild use case-based samples in the AWS CodeBuild User Guide.
Learn about this finding type's supported assets and scan settings.
Credentials Unused 45 Days Greater Disabled
Category name in the API: CREDENTIALS_UNUSED_45_DAYS_GREATER_DISABLED
AWS IAM users can access AWS resources using different types of credentials, such as passwords or access keys. It is recommended that all credentials that have been unused in 45 or greater days be deactivated or removed.
Recommendation: Ensure credentials unused for 45 days or greater are disabled
To remediate this finding, complete the following steps:
AWS console
Perform the following to manage unused passwords (IAM user console access):
- Login to the AWS Management Console
- Click Services
- Click IAM
- Click on Users
- Click on Security Credentials
- Select the user whose Console last sign-in is greater than 45 days
- Click Security credentials
- In the Sign-in credentials section, next to Console password, click Manage
- Under Console Access, select Disable
- Click Apply
Perform the following to deactivate access keys:
- Login to the AWS Management Console
- Click Services
- Click IAM
- Click on Users
- Click on Security Credentials
- Select any access keys that are over 45 days old and that have been used, and click Make Inactive
- Select any access keys that are over 45 days old and that have not been used, and click the X to Delete
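The equivalent AWS CLI calls are sketched below; the first deactivates a stale access key and the second removes a user's console password by deleting their login profile:
aws iam update-access-key --user-name <user_name> --access-key-id <access_key_id> --status Inactive
aws iam delete-login-profile --user-name <user_name>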
Learn about this finding type's supported assets and scan settings.
Default Security Group Vpc Restricts All Traffic
Category name in the API: DEFAULT_SECURITY_GROUP_VPC_RESTRICTS_ALL_TRAFFIC
A VPC comes with a default security group whose initial settings deny all inbound traffic, allow all outbound traffic, and allow all traffic between instances assigned to the security group. If you don't specify a security group when you launch an instance, the instance is automatically assigned to this default security group. Security groups provide stateful filtering of ingress/egress network traffic to AWS resources. It is recommended that the default security group restrict all traffic.
The default VPC in every region should have its default security group updated to comply. Any newly created VPCs will automatically contain a default security group that will need remediation to comply with this recommendation.
NOTE: When implementing this recommendation, VPC flow logging is invaluable in determining the least privilege port access required by systems to work properly because it can log all packet acceptances and rejections occurring under the current security groups. This dramatically reduces the primary barrier to least privilege engineering - discovering the minimum ports required by systems in the environment. Even if the VPC flow logging recommendation in this benchmark is not adopted as a permanent security measure, it should be used during any period of discovery and engineering for least privileged security groups.
Recommendation: Ensure the default security group of every VPC restricts all traffic
Security Group Members
Perform the following to implement the prescribed state:
- Identify AWS resources that exist within the default security group
- Create a set of least privilege security groups for those resources
- Place the resources in those security groups
- Remove the resources noted in #1 from the default security group
Security Group State
- Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home
- Repeat the next steps for all VPCs, including the default VPC in each AWS region:
- In the left pane, click Security Groups
- For each default security group, perform the following:
- Select the default security group
- Click the Inbound Rules tab
- Remove any inbound rules
- Click the Outbound Rules tab
- Remove any outbound rules
Recommended:
IAM groups allow you to edit the "name" field. After remediating default groups rules for all VPCs in all regions, edit this field to add text similar to "DO NOT USE. DO NOT ADD RULES"
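As a command-line sketch, the default rules can be removed with the AWS CLI; this assumes <default_sg_id> is the ID of the VPC's default security group:
# Remove the default inbound rule that allows all traffic from the group itself.
aws ec2 revoke-security-group-ingress --group-id <default_sg_id> --protocol all --source-group <default_sg_id>
# Remove the default outbound rule that allows all traffic to anywhere.
aws ec2 revoke-security-group-egress --group-id <default_sg_id> --ip-permissions '[{"IpProtocol":"-1","IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'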
Learn about this finding type's supported assets and scan settings.
Dms Replication Not Public
Category name in the API: DMS_REPLICATION_NOT_PUBLIC
Checks whether AWS DMS replication instances are public. To do this, it examines the value of the PubliclyAccessible field.
A private replication instance has a private IP address that you cannot access outside of the replication network. A replication instance should have a private IP address when the source and target databases are in the same network. The network must also be connected to the replication instance's VPC using a VPN, AWS Direct Connect, or VPC peering. To learn more about public and private replication instances, see Public and private replication instances in the AWS Database Migration Service User Guide.
You should also ensure that access to your AWS DMS instance configuration is limited to only authorized users. To do this, restrict users' IAM permissions to modify AWS DMS settings and resources.
Recommendation: Checks whether AWS Database Migration Service replication instances are public
You can't change the public access setting for a DMS replication instance after creating it. To change the public access setting, delete your current instance, and then recreate it. Don't select the Publicly accessible option.
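When recreating the instance with the AWS CLI, the private setting corresponds to the --no-publicly-accessible flag. A minimal sketch; the remaining settings (replication subnet group, engine version, allocated storage, and so on) would need to be carried over from the original instance:
aws dms create-replication-instance --replication-instance-identifier <new_instance_id> --replication-instance-class <instance_class> --no-publicly-accessible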
Learn about this finding type's supported assets and scan settings.
Do Setup Access Keys During Initial User Setup All Iam Users Console
Category name in the API: DO_SETUP_ACCESS_KEYS_DURING_INITIAL_USER_SETUP_ALL_IAM_USERS_CONSOLE
AWS console defaults to no check boxes selected when creating a new IAM user. When creating the IAM User credentials you have to determine what type of access they require.
Programmatic access: The IAM user might need to make API calls, use the AWS CLI, or use the Tools for Windows PowerShell. In that case, create an access key (access key ID and a secret access key) for that user.
AWS Management Console access: If the user needs to access the AWS Management Console, create a password for the user.
Recommendation: Do not setup access keys during initial user setup for all IAM users that have a console password
To remediate this finding, complete the following steps:
AWS console
- Login to the AWS Management Console
- Click Services
- Click IAM
- Click on Users
- Click on Security Credentials
- As an Administrator: click on the X (Delete) for keys that were created at the same time as the user profile but have not been used.
- As an IAM User: click on the X (Delete) for keys that were created at the same time as the user profile but have not been used.
AWS CLI
aws iam delete-access-key --access-key-id <access-key-id-listed> --user-name <users-name>
Learn about this finding type's supported assets and scan settings.
Dynamodb Autoscaling Enabled
Category name in the API: DYNAMODB_AUTOSCALING_ENABLED
This checks whether an Amazon DynamoDB table can scale its read and write capacity as needed. This control passes if the table uses either on-demand capacity mode or provisioned mode with auto scaling configured. Scaling capacity with demand avoids throttling exceptions, which helps to maintain availability of your applications.
DynamoDB tables in on-demand capacity mode are only limited by the DynamoDB throughput default table quotas. To raise these quotas, you can file a support ticket through AWS Support.
DynamoDB tables in provisioned mode with auto scaling adjust the provisioned throughput capacity dynamically in response to traffic patterns. For additional information on DynamoDB request throttling, see Request throttling and burst capacity in the Amazon DynamoDB Developer Guide.
Recommendation: DynamoDB tables should automatically scale capacity with demand
For detailed instructions on enabling DynamoDB automatic scaling on existing tables in capacity mode, see Enabling DynamoDB auto scaling on existing tables in the Amazon DynamoDB Developer Guide.
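One way to satisfy this control is to switch a table to on-demand capacity mode with the AWS CLI, as sketched below; alternatively, keep provisioned mode and configure auto scaling through Application Auto Scaling:
aws dynamodb update-table --table-name <table_name> --billing-mode PAY_PER_REQUEST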
Learn about this finding type's supported assets and scan settings.
Dynamodb In Backup Plan
Category name in the API: DYNAMODB_IN_BACKUP_PLAN
This control evaluates whether a DynamoDB table is covered by a backup plan. The control fails if a DynamoDB table isn't covered by a backup plan. This control only evaluates DynamoDB tables that are in the ACTIVE state.
Backups help you recover more quickly from a security incident. They also strengthen the resilience of your systems. Including DynamoDB tables in a backup plan helps you protect your data from unintended loss or deletion.
Recommendation: DynamoDB tables should be covered by a backup plan
To add a DynamoDB table to an AWS Backup backup plan, see Assigning resources to a backup plan in the AWS Backup Developer Guide.
Learn about this finding type's supported assets and scan settings.
Dynamodb Pitr Enabled
Category name in the API: DYNAMODB_PITR_ENABLED
Point In Time Recovery (PITR) is one of the mechanisms available to backup DynamoDB tables.
A point in time backup is kept for 35 days. In case your requirement is for longer retention, please see Set up scheduled backups for Amazon DynamoDB using AWS Backup in the AWS Documentation.
Recommendation: Checks that point in time recovery (PITR) is enabled for all AWS DynamoDB tables
To remediate this finding, complete the following steps:
Terraform
In order to set PITR for DynamoDB tables, set the point_in_time_recovery block:
resource "aws_dynamodb_table" "example" {
# ... other configuration ...
point_in_time_recovery {
enabled = true
}
}
AWS console
To enable DynamoDB point-in-time recovery for an existing table
- Open the DynamoDB console at https://console.aws.amazon.com/dynamodb/.
- Choose the table that you want to work with, and then choose Backups.
- In the Point-in-time Recovery section, under Status, choose Enable.
- Choose Enable again to confirm the change.
AWS CLI
aws dynamodb update-continuous-backups \
--table-name "GameScoresOnDemand" \
--point-in-time-recovery-specification "PointInTimeRecoveryEnabled=true"
Learn about this finding type's supported assets and scan settings.
Dynamodb Table Encrypted Kms
Category name in the API: DYNAMODB_TABLE_ENCRYPTED_KMS
Checks whether all DynamoDB tables are encrypted with a customer managed KMS key (non-default).
Recommendation: Checks that all DynamoDB tables are encrypted with AWS Key Management Service (KMS)
To remediate this finding, complete the following steps:
Terraform
To remediate this control, create an AWS KMS Key and use it to encrypt the violating DynamoDB resource.
resource "aws_kms_key" "dynamodb_encryption" {
description = "Used for DynamoDB encryption configuration"
enable_key_rotation = true
}
resource "aws_dynamodb_table" "example" {
# ... other configuration ...
server_side_encryption {
enabled = true
kms_key_arn = aws_kms_key.dynamodb_encryption.arn
}
}
AWS console
Assuming there is an existing AWS KMS key available to encrypt DynamoDB.
To change a DynamoDB table encryption to a customer managed and owned KMS key.
- Open the DynamoDB console at https://console.aws.amazon.com/dynamodb/.
- Choose the table that you want to work with, and then choose Additional settings.
- Under Encryption, choose Manage encryption.
- For Encryption at rest, choose Stored in your account, and owned and managed by you.
- Select the AWS Key to use. Save changes.
AWS CLI
aws dynamodb update-table \
--table-name <value> \
--sse-specification "Enabled=true,SSEType=KMS,KMSMasterKeyId=<kms_key_arn>"
Learn about this finding type's supported assets and scan settings.
Ebs Optimized Instance
Category name in the API: EBS_OPTIMIZED_INSTANCE
Checks whether EBS optimization is enabled for your EC2 instances that can be EBS-optimized.
Recommendation: Checks that EBS optimization is enabled for all instances that support EBS optimization
To configure EBS optimized instance settings, see Amazon EBS-optimized instances in the Amazon EC2 user guide.
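As a sketch, EBS optimization can be turned on for a supported instance type with the AWS CLI; the instance must be stopped before its ebsOptimized attribute can be changed:
aws ec2 stop-instances --instance-ids <instance_id>
aws ec2 modify-instance-attribute --instance-id <instance_id> --ebs-optimized
aws ec2 start-instances --instance-ids <instance_id>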
Learn about this finding type's supported assets and scan settings.
Ebs Snapshot Public Restorable Check
Category name in the API: EBS_SNAPSHOT_PUBLIC_RESTORABLE_CHECK
Checks whether Amazon Elastic Block Store snapshots are not public. The control fails if Amazon EBS snapshots are restorable by anyone.
EBS snapshots are used to back up the data on your EBS volumes to Amazon S3 at a specific point in time. You can use the snapshots to restore previous states of EBS volumes. It is rarely acceptable to share a snapshot with the public. Typically the decision to share a snapshot publicly was made in error or without a complete understanding of the implications. This check helps ensure that all such sharing was fully planned and intentional.
Recommendation: Amazon EBS snapshots should not be publicly restorable
To remediate this finding, complete the following steps:
AWS console
To remediate this issue, update your EBS snapshot to make it private instead of public.
To make a public EBS snapshot private:
- Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
- In the navigation pane, under Elastic Block Store, choose Snapshots menu and then choose your public snapshot.
- From Actions, choose Modify permissions.
- Choose Private.
- (Optional) Add the AWS account numbers of the authorized accounts to share your snapshot with and choose Add Permission.
- Choose Save.
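The same change can be scripted with the AWS CLI; the following sketch removes the public (all) create-volume permission from a snapshot:
aws ec2 modify-snapshot-attribute --snapshot-id <snapshot_id> --attribute createVolumePermission --operation-type remove --group-names all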
Learn about this finding type's supported assets and scan settings.
Ebs Volume Encryption Enabled All Regions
Category name in the API: EBS_VOLUME_ENCRYPTION_ENABLED_ALL_REGIONS
Elastic Compute Cloud (EC2) supports encryption at rest when using the Elastic Block Store (EBS) service. While disabled by default, forcing encryption at EBS volume creation is supported.
Recommendation: Ensure EBS Volume Encryption is Enabled in all Regions
To remediate this finding, complete the following steps:
AWS console
- Login to AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/
- Under Account attributes, click EBS encryption.
- Click Manage.
- Click the Enable checkbox.
- Click Update EBS encryption.
- Repeat for every region requiring the change.
Note: EBS volume encryption is configured per region.
AWS CLI
- Run aws --region <region> ec2 enable-ebs-encryption-by-default
- Verify that "EbsEncryptionByDefault": true is displayed.
- Repeat for every region requiring the change.
Note: EBS volume encryption is configured per region.
Learn about this finding type's supported assets and scan settings.
Ec2 Instances In Vpc
Category name in the API: EC2_INSTANCES_IN_VPC
Amazon VPC provides more security functionality than EC2 Classic. It is recommended that all nodes belong to an Amazon VPC.
Recommendation: Ensures that all instances belong to a VPC
To remediate this finding, complete the following steps:
Terraform
If you have EC2 Classic Instances defined in Terraform, you may modify your resources to be part of a VPC. This migration will depend on an architecture that best suits your needs. The following is a simple Terraform example that illustrates a publicly exposed EC2 in a VPC.
resource "aws_vpc" "example_vpc" {
cidr_block = "10.0.0.0/16"
}
resource "aws_subnet" "example_public_subnet" {
vpc_id = aws_vpc.example_vpc.id
cidr_block = "10.0.1.0/24"
availability_zone = "1a"
}
resource "aws_internet_gateway" "example_igw" {
vpc_id = aws_vpc.example_vpc.id
}
resource "aws_key_pair" "example_key" {
key_name = "web-instance-key"
public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3F6tyPEFEzV0LX3X8BsXdMsQz1x2cEikKDEY0aIj41qgxMCP/iteneqXSIFZBp5vizPvaoIR3Um9xK7PGoW8giupGn+EPuxIA4cDM4vzOqOkiMPhz5XK0whEjkVzTo4+S0puvDZuwIsdiW9mxhJc7tgBNL0cYlWSYVkz4G/fslNfRPW5mYAM49f4fhtxPb5ok4Q2Lg9dPKVHO/Bgeu5woMc7RY0p1ej6D4CKFE6lymSDJpW0YHX/wqE9+cfEauh7xZcG0q9t2ta6F6fmX0agvpFyZo8aFbXeUBr7osSCJNgvavWbM/06niWrOvYX2xwWdhXmXSrbX8ZbabVohBK41 email@example.com"
}
resource "aws_security_group" "web_sg" {
name = "http and ssh"
vpc_id = aws_vpc.example_vpc.id
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = -1
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "web" {
ami = <ami_id>
instance_type = <instance_flavor>
key_name = aws_key_pair.example_key.name
monitoring = true
subnet_id = aws_subnet.example_public_subnet.id
vpc_security_group_ids = [aws_security_group.web_sg.id]
metadata_options {
http_tokens = "required"
}
}
AWS console
In order to migrate your EC2-Classic instances to a VPC, see Migrate from EC2-Classic to a VPC.
AWS CLI
This AWS CLI example illustrates the same infrastructure defined with Terraform. It's a simple example of a publicly exposed EC2 instance in a VPC
Create VPC
aws ec2 create-vpc \
--cidr-block 10.0.0.0/16
Create Public Subnet
aws ec2 create-subnet \
--availability-zone 1a \
--cidr-block 10.0.1.0/24 \
--vpc-id <id_from_create-vpc_command>
Create Internet Gateway
aws ec2 create-internet-gateway
Attach Internet Gateway to VPC
aws ec2 attach-internet-gateway \
--internet-gateway-id <id_from_create-internet-gateway_command> \
--vpc-id <id_from_create-vpc_command>
Create Key Pair - This will save your private key in ~/.ssh/web-instance-key.pem
aws ec2 create-key-pair \
--key-name web-instance-key \
--query "KeyMaterial" \
--output text > ~/.ssh/web-instance-key.pem && \
chmod 400 ~/.ssh/web-instance-key.pem
Create Security Group
aws ec2 create-security-group \
--group-name "http and ssh" \
--vpc-id <id_from_create-vpc_command>
Create Security Group Rules - For more restricted access, define a more restricted CIDR for SSH on port 22
aws ec2 authorize-security-group-ingress \
--group-id <id_from_create-security-group_command> \
--protocol tcp \
--port 80 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-id <id_from_create-security-group_command> \
--protocol tcp \
--port 22 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-egress \
--group-id <id_from_create-security-group_command> \
--protocol -1 \
--port 0 \
--cidr 0.0.0.0/0
Create EC2 Instance
aws ec2 run-instances \
--image-id <ami_id> \
--instance-type <instance_flavor> \
--metadata-options "HttpEndpoint=enabled,HttpTokens=required" \
--monitoring Enabled=true \
--key-name web-instance-key \
--subnet-id <id_from_create-subnet_command> \
--security-group-ids <id_from_create-security-group_command>
Learn about this finding type's supported assets and scan settings.
Ec2 Instance No Public Ip
Category name in the API: EC2_INSTANCE_NO_PUBLIC_IP
EC2 instances that have a public IP address are at an increased risk of compromise. It is recommended that EC2 instances not be configured with a public IP address.
Recommendation: Ensures no instances have a public IP
To remediate this finding, complete the following steps:
Terraform
Use the associate_public_ip_address = false argument with the aws_instance resource to ensure EC2 instances are provisioned without a public IP address:
resource "aws_instance" "no_public_ip" {
...
associate_public_ip_address = false
}
AWS console
By default, nondefault subnets have the IPv4 public addressing attribute set to false, and default subnets have this attribute set to true. An exception is a nondefault subnet created by the Amazon EC2 launch instance wizard — the wizard sets the attribute to true. You can modify this attribute using the Amazon VPC console.
To modify your subnet's public IPv4 addressing behavior
- Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.
- In the navigation pane, choose Subnets.
- Select your subnet and choose Actions, Edit subnet settings.
- The Enable auto-assign public IPv4 address check box, if selected, requests a public IPv4 address for all instances launched into the selected subnet. Select or clear the check box as required, and then choose Save.
AWS CLI
The following command runs an EC2 Instance in a default subnet without associating a public IP address to it.
aws ec2 run-instances \
--image-id <ami_id> \
--instance-type <instance_flavor> \
--no-associate-public-ip-address \
--key-name MyKeyPair
Learn about this finding type's supported assets and scan settings.
Ec2 Managedinstance Association Compliance Status Check
Category name in the API: EC2_MANAGEDINSTANCE_ASSOCIATION_COMPLIANCE_STATUS_CHECK
A State Manager association is a configuration that is assigned to your managed instances. The configuration defines the state that you want to maintain on your instances. For example, an association can specify that antivirus software must be installed and running on your instances, or that certain ports must be closed. EC2 instances that have an association with AWS Systems Manager are under management of Systems Manager which makes it easier to apply patches, fix misconfigurations, and respond to security events.
Recommendation: Checks the compliance status of the AWS Systems Manager association
To remediate this finding, complete the following steps:
Terraform
The following example demonstrates how to create a simple EC2 instance, an AWS Systems Manager (SSM) document, and an association between SSM and the EC2 instance. Supported document types are Command and Policy.
resource "aws_instance" "web" {
ami = "<ami_id>"
instance_type = "<instance_flavor>"
}
resource "aws_ssm_document" "check_ip" {
name = "check-ip-config"
document_type = "Command"
content = <<DOC
{
"schemaVersion": "1.2",
"description": "Check ip configuration of a Linux instance.",
"parameters": {
},
"runtimeConfig": {
"aws:runShellScript": {
"properties": [
{
"id": "0.aws:runShellScript",
"runCommand": ["ifconfig"]
}
]
}
}
}
DOC
}
resource "aws_ssm_association" "check_ip_association" {
name = aws_ssm_document.check_ip.name
targets {
key = "InstanceIds"
values = [aws_instance.web.id]
}
}
AWS console
For information on configuring associations with AWS Systems Manager using the console, see Creating Associations in the AWS Systems Manager documentation.
AWS CLI
Create an SSM Document
aws ssm create-document \
--name <document_name> \
--content file://path/to-file/document.json \
--document-type "Command"
Create an SSM Association
aws ssm create-association \
--name <association_name> \
--targets "Key=InstanceIds,Values=<instance-id-1>,<instance-id-2>"
Learn about this finding type's supported assets and scan settings.
Ec2 Managedinstance Patch Compliance Status Check
Category name in the API: EC2_MANAGEDINSTANCE_PATCH_COMPLIANCE_STATUS_CHECK
This control checks whether the status of the AWS Systems Manager association compliance is COMPLIANT or NON_COMPLIANT after the association is run on an instance. The control fails if the association compliance status is NON_COMPLIANT.
A State Manager association is a configuration that is assigned to your managed instances. The configuration defines the state that you want to maintain on your instances. For example, an association can specify that antivirus software must be installed and running on your instances or that certain ports must be closed.
After you create one or more State Manager associations, compliance status information is immediately available to you. You can view the compliance status in the console or in response to AWS CLI commands or corresponding Systems Manager API actions. For associations, Configuration Compliance shows the compliance status (Compliant or Non-compliant). It also shows the severity level assigned to the association, such as Critical or Medium.
To learn more about State Manager association compliance, see About State Manager association compliance in the AWS Systems Manager User Guide.
Recommendation: Checks the status of AWS Systems Manager patch compliance
A failed association can be related to different things, including targets and SSM document names. To remediate this issue, you must first identify and investigate the association by viewing association history. For instructions on viewing association history, see Viewing association histories in the AWS Systems Manager User Guide.
After investigating, you can edit the association to correct the identified issue. You can edit an association to specify a new name, schedule, severity level, or targets. After you edit an association, AWS Systems Manager creates a new version. For instructions on editing an association, see Editing and creating a new version of an association in the AWS Systems Manager User Guide.
Learn about this finding type's supported assets and scan settings.
Ec2 Metadata Service Allows Imdsv2
Category name in the API: EC2_METADATA_SERVICE_ALLOWS_IMDSV2
When enabling the Metadata Service on AWS EC2 instances, users have the option of using either Instance Metadata Service Version 1 (IMDSv1; a request/response method) or Instance Metadata Service Version 2 (IMDSv2; a session-oriented method).
Recommendation: Ensure that EC2 Metadata Service only allows IMDSv2
From Console:
1. Login to AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/
2. Under the Instances menu, select Instances.
3. For each Instance, select the instance, then choose Actions > Modify instance metadata options.
4. If the Instance metadata service is enabled, set IMDSv2 to Required.
From Command Line:
aws ec2 modify-instance-metadata-options --instance-id <instance_id> --http-tokens required
Learn about this finding type's supported assets and scan settings.
Ec2 Volume Inuse Check
Category name in the API: EC2_VOLUME_INUSE_CHECK
Identifying and removing unattached (unused) Elastic Block Store (EBS) volumes in your AWS account lowers the cost of your monthly AWS bill. Deleting unused EBS volumes also reduces the risk of confidential or sensitive data leaving your premises. Additionally, this control checks whether EC2 instances are configured to delete volumes on termination.
By default, EC2 instances are configured to delete the data in any EBS volumes associated with the instance and to delete the root EBS volume of the instance on termination. However, any non-root EBS volumes attached to the instance, at launch or during execution, persist after termination by default.
Recommendation: Checks whether EBS volumes are attached to EC2 instances and configured for deletion on instance termination
To remediate this finding, complete the following steps:
Terraform
In order to prevent this scenario using Terraform, create EC2 instances with embedded EBS blocks. This ensures that any EBS blocks associated with the instance (not only the root volume) are deleted on instance termination, because the attribute ebs_block_device.delete_on_termination defaults to true.
resource "aws_instance" "web" {
  ami           = "<ami_id>"
  instance_type = "<instance_flavor>"
  ebs_block_device {
    delete_on_termination = true # Default
    device_name           = "/dev/sdh"
  }
}
AWS console
To delete an EBS volume using the console
- Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
- In the navigation pane, choose Volumes.
- Select the volume to delete and choose Actions, Delete volume.
- Note: If Delete volume is greyed out, the volume is attached to an instance. You must detach the volume from the instance before it can be deleted.
- In the confirmation dialog box, choose Delete.
AWS CLI
This example command deletes an available volume with the volume ID of vol-049df61146c4d7901. If the command succeeds, no output is returned.
aws ec2 delete-volume --volume-id vol-049df61146c4d7901
Learn about this finding type's supported assets and scan settings.
Efs Encrypted Check
Category name in the API: EFS_ENCRYPTED_CHECK
Amazon EFS supports two forms of encryption for file systems, encryption of data in transit and encryption at rest. This checks that all EFS file systems are configured with encryption-at-rest across all enabled regions in the account.
Recommendation: Checks whether EFS is configured to encrypt file data using KMS
To remediate this finding, complete the following steps:
Terraform
The following code snippet can be used to create a KMS-encrypted EFS file system (note: the kms_key_id attribute is optional, and a key will be created if no KMS key ID is passed).
resource "aws_efs_file_system" "encrypted-efs" {
creation_token = "my-kms-encrypted-efs"
encrypted = true
kms_key_id = "arn:aws:kms:us-west-2:12344375555:key/16393ebd-3348-483f-b162-99b6648azz23"
tags = {
Name = "MyProduct"
}
}
AWS console
To configure EFS with encryption using the AWS console, see Encrypting a file system at rest using the console.
AWS CLI
Note that while creating an EFS file system from the console enables encryption at rest by default, this is not true for file systems created using the CLI, API, or SDK. The following example allows you to create an encrypted file system in your infrastructure.
aws efs create-file-system \
--backup \
--encrypted \
--region us-east-1
Learn about this finding type's supported assets and scan settings.
Efs In Backup Plan
Category name in the API: EFS_IN_BACKUP_PLAN
Amazon best practices recommend configuring backups for your Elastic File Systems (EFS). This checks all EFS across every enabled region in your AWS account for enabled backups.
Recommendation: Checks whether EFS filesystems are included in AWS Backup plans
To remediate this finding, complete the following steps:
Terraform
Use the aws_efs_backup_policy
resource to configure a backup policy for EFS file systems.
resource "aws_efs_file_system" "encrypted-efs" {
creation_token = "my-encrypted-efs"
encrypted = true
tags = merge({
Name = "${local.resource_prefix.value}-efs"
}, {
git_file = "terraform/aws/efs.tf"
git_org = "your_git_org"
git_repo = "your_git_repo"
})
}
resource "aws_efs_backup_policy" "policy" {
file_system_id = aws_efs_file_system.encrypted-efs.id
backup_policy {
status = "ENABLED"
}
}
AWS console
There are two options for backing up EFS: AWS Backup service and EFS-to-EFS backup solution. In order to remediate non-backed up EFS using the console, see:
AWS CLI
There are a few options to create compliant EFS file systems using the CLI:
- Create an EFS file system with automatic backup enabled (the default for One Zone storage, conditional on backup availability in the AWS Region)
- Create an EFS file system and put a backup policy on it
However, assuming the remediation needs to happen on existing EFS file systems, the best option is to create a backup policy and associate it with each non-compliant file system. You need one command for every EFS file system in your infrastructure.
arr=( $(aws efs describe-file-systems | jq -r '.FileSystems[].FileSystemId') )
for efs in "${arr[@]}"
do
aws efs put-backup-policy \
--file-system-id "${efs}" \
--backup-policy "Status=ENABLED"
done
Learn about this finding type's supported assets and scan settings.
Elb Acm Certificate Required
Category name in the API: ELB_ACM_CERTIFICATE_REQUIRED
Checks whether the Classic Load Balancer uses HTTPS/SSL certificates provided by AWS Certificate Manager (ACM). The control fails if a Classic Load Balancer configured with an HTTPS/SSL listener does not use a certificate provided by ACM.
To create a certificate, you can use either ACM or a tool that supports the SSL and TLS protocols, such as OpenSSL. Security Hub recommends that you use ACM to create or import certificates for your load balancer.
ACM integrates with Classic Load Balancers so that you can deploy the certificate on your load balancer. You should also renew these certificates automatically.
Recommendation: Checks that all Classic Load Balancers use SSL certificates provided by AWS Certificate Manager
For information about how to associate an ACM SSL/TLS certificate with a Classic Load Balancer, see the AWS Knowledge Center article How can I associate an ACM SSL/TLS certificate with a Classic, Application, or Network Load Balancer?
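As a hedged example, an ACM certificate can also be attached to an existing HTTPS listener on a Classic Load Balancer from the AWS CLI; the load balancer name, port, and certificate ARN below are illustrative placeholders:
aws elb set-load-balancer-listener-ssl-certificate \
--load-balancer-name <load-balancer-name> \
--load-balancer-port 443 \
--ssl-certificate-id <acm-certificate-arn>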
Learn about this finding type's supported assets and scan settings.
Elb Deletion Protection Enabled
Category name in the API: ELB_DELETION_PROTECTION_ENABLED
Checks whether an Application Load Balancer has deletion protection enabled. The control fails if deletion protection is not configured.
Enable deletion protection to protect your Application Load Balancer from deletion.
Recommendation: Application Load Balancer deletion protection should be enabled
To remediate this finding, complete the following steps:
AWS console
To prevent your load balancer from being deleted accidentally, you can enable deletion protection. By default, deletion protection is disabled for your load balancer.
If you enable deletion protection for your load balancer, you must disable deletion protection before you can delete the load balancer.
To enable deletion protection from the console:
- Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
- On the navigation pane, under LOAD BALANCING, choose Load Balancers.
- Choose the load balancer.
- On the Description tab, choose Edit attributes.
- On the Edit load balancer attributes page, select Enable for Delete Protection, and then choose Save.
- Choose Save.
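Alternatively, as an illustrative sketch, deletion protection can be enabled on an Application Load Balancer from the AWS CLI (the load balancer ARN is a placeholder):
aws elbv2 modify-load-balancer-attributes \
--load-balancer-arn <load-balancer-arn> \
--attributes Key=deletion_protection.enabled,Value=true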
Learn about this finding type's supported assets and scan settings.
Elb Logging Enabled
Category name in the API: ELB_LOGGING_ENABLED
This checks whether the Application Load Balancer and the Classic Load Balancer have logging enabled. The control fails if access_logs.s3.enabled is false.
Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot issues.
To learn more, see Access logs for your Classic Load Balancer in User Guide for Classic Load Balancers.
Recommendation: Checks whether classic and application load balancers have logging enabled
To remediate this finding, complete the following steps:
AWS console
To remediate this issue, update your load balancers to enable logging.
To enable access logs
- Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
- In the navigation pane, choose Load balancers.
- Choose an Application Load Balancer or Classic Load Balancer.
- From Actions, choose Edit attributes.
- Under Access logs, choose Enable.
- Enter your S3 location. This location can exist or it can be created for you. If you do not specify a prefix, the access logs are stored in the root of the S3 bucket.
- Choose Save.
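As an illustrative example, access logging for an Application Load Balancer can also be enabled from the AWS CLI; the load balancer ARN, bucket name, and prefix below are placeholders:
aws elbv2 modify-load-balancer-attributes \
--load-balancer-arn <load-balancer-arn> \
--attributes Key=access_logs.s3.enabled,Value=true Key=access_logs.s3.bucket,Value=<bucket-name> Key=access_logs.s3.prefix,Value=<prefix>
For a Classic Load Balancer, the equivalent setting can be made with aws elb modify-load-balancer-attributes using the AccessLog attribute.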
Learn about this finding type's supported assets and scan settings.
Elb Tls Https Listeners Only
Category name in the API: ELB_TLS_HTTPS_LISTENERS_ONLY
This check ensures all Classic Load Balancers are configured to use secure communication.
A listener is a process that checks for connection requests. It is configured with a protocol and a port for front-end (client to load balancer) connections and a protocol and a port for back-end (load balancer to instance) connections. For information about the ports, protocols, and listener configurations supported by Elastic Load Balancing, see Listeners for your Classic Load Balancer.
Recommendation: Checks that all Classic Load Balancers are configured with SSL or HTTPS listeners
To configure SSL or TLS for Classic Load Balancers, see Create an HTTPS/SSL load balancer using the AWS Management Console.
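As a hedged sketch, a plain HTTP listener on a Classic Load Balancer can also be replaced with an HTTPS listener from the AWS CLI; the load balancer name and certificate ARN are placeholders:
aws elb delete-load-balancer-listeners \
--load-balancer-name <load-balancer-name> \
--load-balancer-ports 80
aws elb create-load-balancer-listeners \
--load-balancer-name <load-balancer-name> \
--listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=<acm-certificate-arn>"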
Learn about this finding type's supported assets and scan settings.
Encrypted Volumes
Category name in the API: ENCRYPTED_VOLUMES
Checks whether the EBS volumes that are in an attached state are encrypted. To pass this check, EBS volumes must be in use and encrypted. If the EBS volume is not attached, then it is not subject to this check.
For an added layer of security of your sensitive data in EBS volumes, you should enable EBS encryption at rest. Amazon EBS encryption offers a straightforward encryption solution for your EBS resources that doesn't require you to build, maintain, and secure your own key management infrastructure. It uses KMS keys when creating encrypted volumes and snapshots.
To learn more about Amazon EBS encryption, see Amazon EBS encryption in the Amazon EC2 User Guide for Linux Instances.
Recommendation: Attached Amazon EBS volumes should be encrypted at-rest
To remediate this finding, complete the following steps:
AWS console
There is no direct way to encrypt an existing unencrypted volume or snapshot. You can only encrypt a new volume or snapshot when you create it.
If you enabled encryption by default, Amazon EBS encrypts the resulting new volume or snapshot using your default key for Amazon EBS encryption. Even if you have not enabled encryption by default, you can enable encryption when you create an individual volume or snapshot. In both cases, you can override the default key for Amazon EBS encryption and choose a symmetric customer managed key.
For more information, see Creating an Amazon EBS volume and Copying an Amazon EBS snapshot in the Amazon EC2 User Guide for Linux Instances.
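As an illustrative example only, you can also turn on EBS encryption by default for a region and create new encrypted volumes from the AWS CLI; the region, availability zone, and size values are placeholders:
aws ec2 enable-ebs-encryption-by-default --region <region>
aws ec2 create-volume \
--availability-zone <availability-zone> \
--size <size-in-gib> \
--encrypted
An unencrypted snapshot can likewise be copied with the --encrypted flag (aws ec2 copy-snapshot) and the new volume created from the encrypted copy.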
Learn about this finding type's supported assets and scan settings.
Encryption At Rest Enabled Rds Instances
Category name in the API: ENCRYPTION_AT_REST_ENABLED_RDS_INSTANCES
Amazon RDS encrypted DB instances use the industry standard AES-256 encryption algorithm to encrypt your data on the server that hosts your Amazon RDS DB instances. After your data is encrypted, Amazon RDS handles authentication of access and decryption of your data transparently with a minimal impact on performance.
Recommendation: Ensure that encryption-at-rest is enabled for RDS Instances
To remediate this finding, complete the following steps:
AWS console
- Login to the AWS Management Console and open the RDS dashboard at https://console.aws.amazon.com/rds/.
- In the left navigation panel, click on Databases.
- Select the database instance that needs to be encrypted.
- Click the Actions button at the top right and select Take Snapshot.
- On the Take Snapshot page, enter the name of the database you want to snapshot in the Snapshot Name field and click Take Snapshot.
- Select the newly created snapshot, click the Actions button at the top right, and select Copy Snapshot from the Actions menu.
- On the Make Copy of DB Snapshot page, perform the following:
  - In the New DB Snapshot Identifier field, enter a name for the new snapshot.
  - Check Copy Tags, so the new snapshot has the same tags as the source snapshot.
  - Select Yes from the Enable Encryption dropdown list to enable encryption. You can choose to use the AWS default encryption key or a custom key from the Master Key dropdown list.
- Click Copy Snapshot to create an encrypted copy of the selected instance snapshot.
- Select the new encrypted snapshot copy, click the Actions button at the top right, and select Restore Snapshot from the Actions menu. This restores the encrypted snapshot to a new database instance.
- On the Restore DB Instance page, enter a unique name for the new database instance in the DB Instance Identifier field.
- Review the instance configuration details and click Restore DB Instance.
- Once the new instance is provisioned, update your application configuration to refer to the endpoint of the new encrypted database instance. After the database endpoint is changed at the application level, you can remove the unencrypted instance.
AWS CLI
- Run the describe-db-instances command to list all RDS database instance identifiers available in the selected AWS region:
aws rds describe-db-instances --region <region-name> --query 'DBInstances[*].DBInstanceIdentifier'
- Run the create-db-snapshot command to create a snapshot for the selected database instance. The command output returns the new snapshot with the name DB Snapshot Name:
aws rds create-db-snapshot --region <region-name> --db-snapshot-identifier <DB-Snapshot-Name> --db-instance-identifier <DB-Name>
- Run the list-aliases command to list the KMS key aliases available in the specified region. The command output returns each key alias currently available. For this RDS encryption activation process, locate the ID of the AWS default KMS key:
aws kms list-aliases --region <region-name>
- Run the copy-db-snapshot command using the default KMS key ID for RDS instances returned earlier to create an encrypted copy of the database instance snapshot. The command output returns the encrypted instance snapshot configuration:
aws rds copy-db-snapshot --region <region-name> --source-db-snapshot-identifier <DB-Snapshot-Name> --target-db-snapshot-identifier <DB-Snapshot-Name-Encrypted> --copy-tags --kms-key-id <KMS-ID-For-RDS>
- Run the restore-db-instance-from-db-snapshot command to restore the encrypted snapshot created at the previous step to a new database instance. If successful, the command output returns the new encrypted database instance configuration:
aws rds restore-db-instance-from-db-snapshot --region <region-name> --db-instance-identifier <DB-Name-Encrypted> --db-snapshot-identifier <DB-Snapshot-Name-Encrypted>
- Run the describe-db-instances command again to list all RDS database instance identifiers available in the selected AWS region, and select the encrypted database instance that was just created (DB-Name-Encrypted):
aws rds describe-db-instances --region <region-name> --query 'DBInstances[*].DBInstanceIdentifier'
- Run the describe-db-instances command once more using the RDS instance identifier returned earlier to determine whether the selected database instance is encrypted. The command output should return the encryption status True:
aws rds describe-db-instances --region <region-name> --db-instance-identifier <DB-Name-Encrypted> --query 'DBInstances[*].StorageEncrypted'
Learn about this finding type's supported assets and scan settings.
Encryption Enabled Efs File Systems
Category name in the API: ENCRYPTION_ENABLED_EFS_FILE_SYSTEMS
EFS data should be encrypted at rest using AWS KMS (Key Management Service).
Recommendation: Ensure that encryption is enabled for EFS file systems
To remediate this finding, complete the following steps:
AWS console
- Login to the AWS Management Console and navigate to the Elastic File System (EFS) dashboard.
- Select File Systems from the left navigation panel.
- Click the Create File System button from the dashboard top menu to start the file system setup process.
- On the Configure file system access configuration page, perform the following actions:
  - Choose the right VPC from the VPC dropdown list.
  - Within the Create mount targets section, select the checkboxes for all of the Availability Zones (AZs) within the selected VPC. These will be your mount targets.
  - Click Next step to continue.
- Perform the following on the Configure optional settings page:
  - Create tags to describe your new file system.
  - Choose the performance mode based on your requirements.
  - Check the Enable encryption checkbox and choose aws/elasticfilesystem from the Select KMS master key dropdown list to enable encryption for the new file system using the default master key provided and managed by AWS KMS.
  - Click Next step to continue.
- Review the file system configuration details on the Review and create page and then click Create File System to create your new AWS EFS file system.
- Copy the data from the old unencrypted EFS file system onto the newly created encrypted file system.
- Remove the unencrypted file system as soon as your data migration to the newly created encrypted file system is completed.
- Change the AWS region from the navigation bar and repeat the entire process for other AWS regions.
From CLI:
1. Run describe-file-systems command to describe the configuration information available for the selected (unencrypted) file system (see Audit section to identify the right resource):
aws efs describe-file-systems --region <region> --file-system-id <file-system-id from audit section step 2 output>
- The command output should return the requested configuration information.
- To provision a new AWS EFS file system, you need to generate a universally unique identifier (UUID) in order to create the token required by the create-file-system command. To create the required token, you can use a randomly generated UUID from "https://www.uuidgenerator.net".
- Run create-file-system command using the unique token created at the previous step.
aws efs create-file-system --region <region> --creation-token <Token (randomly generated UUID from step 3)> --performance-mode generalPurpose --encrypted
- The command output should return the new file system configuration metadata.
- Run create-mount-target command using the newly created EFS file system ID returned at the previous step as identifier and the ID of the Availability Zone (AZ) that will represent the mount target:
aws efs create-mount-target --region <region> --file-system-id <file-system-id> --subnet-id <subnet-id>
- The command output should return the new mount target metadata.
- Now you can mount your file system from an EC2 instance.
- Copy the data from the old unencrypted EFS file system onto the newly created encrypted file system.
- Remove the unencrypted file system as soon as your data migration to the newly created encrypted file system is completed.
aws efs delete-file-system --region <region> --file-system-id <unencrypted-file-system-id>
- Change the AWS region by updating the --region and repeat the entire process for other aws regions.
Learn about this finding type's supported assets and scan settings.
Iam Password Policy
Category name in the API: IAM_PASSWORD_POLICY
AWS allows for custom password policies on your AWS account to specify complexity requirements and mandatory rotation periods for your IAM users' passwords. If you don't set a custom password policy, IAM user passwords must meet the default AWS password policy. AWS security best practices recommend the following password complexity requirements:
- Require at least one uppercase character in password.
- Require at least one lowercase character in passwords.
- Require at least one symbol in passwords.
- Require at least one number in passwords.
- Require a minimum password length of at least 14 characters.
- Prevent reuse of at least the previous 24 passwords.
- Require passwords to expire within 90 days.
This control checks all of the specified password policy requirements.
Recommendation: Checks whether the account password policy for IAM users meets the specified requirements
To remediate this finding, complete the following steps:
Terraform
resource "aws_iam_account_password_policy" "strict" {
allow_users_to_change_password = true
require_uppercase_characters = true
require_lowercase_characters = true
require_symbols = true
require_numbers = true
minimum_password_length = 14
password_reuse_prevention = 24
max_password_age = 90
}
AWS console
To create a custom password policy
- Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
- In the navigation pane, choose Account settings.
- In the Password policy section, choose Change password policy.
- Select the options that you want to apply to your password policy and choose Save changes.
To change a custom password policy
- Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
- In the navigation pane, choose Account settings.
- In the Password policy section, choose Change.
- Select the options that you want to apply to your password policy and choose Save changes.
AWS CLI
aws iam update-account-password-policy \
--allow-users-to-change-password \
--require-uppercase-characters \
--require-lowercase-characters \
--require-symbols \
--require-numbers \
--minimum-password-length 14 \
--password-reuse-prevention 24 \
--max-password-age 90
Learn about this finding type's supported assets and scan settings.
Iam Password Policy Prevents Password Reuse
Category name in the API: IAM_PASSWORD_POLICY_PREVENTS_PASSWORD_REUSE
IAM password policies can prevent the reuse of a given password by the same user. It is recommended that the password policy prevent the reuse of passwords.
Recommendation: Ensure IAM password policy prevents password reuse
To remediate this finding, complete the following steps:
AWS console
- Login to AWS Console (with appropriate permissions to view Identity and Access Management account settings)
- Go to IAM Service on the AWS Console
- Click on Account Settings on the left pane
- Check "Prevent password reuse"
- Set "Number of passwords to remember" to 24
AWS CLI
aws iam update-account-password-policy --password-reuse-prevention 24
Note: All commands starting with "aws iam update-account-password-policy" can be combined into a single command.
Learn about this finding type's supported assets and scan settings.
Iam Password Policy Requires Minimum Length 14 Greater
Category name in the API: IAM_PASSWORD_POLICY_REQUIRES_MINIMUM_LENGTH_14_GREATER
Password policies are, in part, used to enforce password complexity requirements. IAM password policies can be used to ensure passwords are at least a given length. It is recommended that the password policy require a minimum password length of 14.
Recommendation: Ensure IAM password policy requires minimum length of 14 or greater
To remediate this finding, complete the following steps:
AWS console
- Login to AWS Console (with appropriate permissions to view Identity and Access Management account settings)
- Go to IAM Service on the AWS Console
- Click on Account Settings on the left pane
- Set "Minimum password length" to 14 or greater
- Click "Apply password policy"
AWS CLI
aws iam update-account-password-policy --minimum-password-length 14
Note: All commands starting with "aws iam update-account-password-policy" can be combined into a single command.
Learn about this finding type's supported assets and scan settings.
Iam Policies Allow Full Administrative Privileges Attached
Category name in the API: IAM_POLICIES_ALLOW_FULL_ADMINISTRATIVE_PRIVILEGES_ATTACHED
IAM policies are the means by which privileges are granted to users, groups, or roles. It is recommended, and considered standard security advice, to grant least privilege, that is, granting only the permissions required to perform a task. Determine what users need to do and then craft policies for them that let the users perform only those tasks, instead of allowing full administrative privileges.
Recommendation: Ensure IAM policies that allow full "*:*" administrative privileges are not attached
To remediate this finding, complete the following steps:
AWS console
Perform the following to detach the policy that has full administrative privileges:
- Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
- In the navigation pane, click Policies and then search for the policy name found in the audit step.
- Select the policy that needs to be deleted.
- In the policy action menu, first select Detach.
- Select all users, groups, and roles that have this policy attached.
- Click Detach Policy.
- In the policy action menu, select Detach.
AWS CLI
Perform the following to detach the policy that has full administrative privileges as found in the audit step:
- Lists all IAM users, groups, and roles that the specified managed policy is attached to.
aws iam list-entities-for-policy --policy-arn <policy_arn>
- Detach the policy from all IAM Users:
aws iam detach-user-policy --user-name <iam_user> --policy-arn <policy_arn>
- Detach the policy from all IAM Groups:
aws iam detach-group-policy --group-name <iam_group> --policy-arn <policy_arn>
- Detach the policy from all IAM Roles:
aws iam detach-role-policy --role-name <iam_role> --policy-arn <policy_arn>
Learn about this finding type's supported assets and scan settings.
Iam Users Receive Permissions Groups
Category name in the API: IAM_USERS_RECEIVE_PERMISSIONS_GROUPS
IAM users are granted access to services, functions, and data through IAM policies. There are four ways to define policies for a user: 1) Edit the user policy directly, aka an inline, or user, policy; 2) attach a policy directly to a user; 3) add the user to an IAM group that has an attached policy; 4) add the user to an IAM group that has an inline policy.
Only the third implementation is recommended.
Recommendation: Ensure IAM Users Receive Permissions Only Through Groups
Perform the following to create an IAM group and assign a policy to it:
- Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
- In the navigation pane, click Groups and then click Create New Group.
- In the Group Name box, type the name of the group and then click Next Step.
- In the list of policies, select the check box for each policy that you want to apply to all members of the group. Then click Next Step.
- Click Create Group.
Perform the following to add a user to a given group:
- Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
- In the navigation pane, click Groups.
- Select the group to add a user to.
- Click Add Users To Group.
- Select the users to be added to the group.
- Click Add Users.
Perform the following to remove a direct association between a user and policy:
- Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
- In the left navigation pane, click on Users.
- For each user:
  - Select the user.
  - Click on the Permissions tab.
  - Expand Permissions policies.
  - Click X for each policy; then click Detach or Remove (depending on the policy type).
Learn about this finding type's supported assets and scan settings.
Iam User Group Membership Check
Category name in the API: IAM_USER_GROUP_MEMBERSHIP_CHECK
IAM users should always be part of an IAM group in order to adhere to IAM security best practices.
By adding users to a group, it is possible to share policies among types of users.
Recommendation: Checks whether IAM users are members of at least one IAM group
To remediate this finding, complete the following steps:
Terraform
resource "aws_iam_user" "example" {
name = "test-iam-user"
path = "/users/dev/"
}
resource "aws_iam_group" "example" {
name = "Developers"
path = "/users/dev/"
}
resource "aws_iam_user_group_membership" "example" {
user = aws_iam_user.example.name
groups = [aws_iam_group.example.name]
}
AWS console
When you use the AWS Management Console to delete an IAM user, IAM automatically deletes the following information for you:
- The user
- Any user group memberships—that is, the user is removed from any IAM user groups that the user was a member of
- Any password associated with the user
- Any access keys belonging to the user
- All inline policies embedded in the user (policies that are applied to a user via user group permissions are not affected)
To delete an IAM user:
- Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
- In the navigation pane, choose Users, and then select the check box next to the user name that you want to delete.
- At the top of the page, choose Delete.
- In the confirmation dialog box, enter the username in the text input field to confirm the deletion of the user.
- Choose Delete.
To add a user to an IAM user group:
- Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
- In the navigation pane, choose User groups and then choose the name of the group.
- Choose the Users tab and then choose Add users. Select the check box next to the users you want to add.
- Choose Add users.
AWS CLI
Unlike the Amazon Web Services Management Console, when you delete a user programmatically, you must delete the items attached to the user manually, or the deletion fails.
Before attempting to delete a user, remove the following items:
- Password ( DeleteLoginProfile )
- Access keys ( DeleteAccessKey )
- Signing certificate ( DeleteSigningCertificate )
- SSH public key ( DeleteSSHPublicKey )
- Git credentials ( DeleteServiceSpecificCredential )
- Multi-factor authentication (MFA) device ( DeactivateMFADevice , DeleteVirtualMFADevice )
- Inline policies ( DeleteUserPolicy )
- Attached managed policies ( DetachUserPolicy )
- Group memberships ( RemoveUserFromGroup )
To delete a user, after deleting all items attached to the user:
aws iam delete-user \
--user-name "test-user"
To add an IAM user to an IAM group:
aws iam add-user-to-group \
--group-name "test-group" \
--user-name "test-user"
Learn about this finding type's supported assets and scan settings.
Iam User Mfa Enabled
Category name in the API: IAM_USER_MFA_ENABLED
Multi-factor authentication (MFA) is a best practice that adds an extra layer of protection on top of user names and passwords. With MFA, when a user signs in to the AWS Management Console, they are required to provide a time-sensitive authentication code, provided by a registered virtual or physical device.
Recommendation: Checks whether the AWS IAM users have multi-factor authentication (MFA) enabled
To remediate this finding, complete the following steps:
Terraform
When it comes to Terraform, there are a few options to remediate the absence of MFA devices. You probably already have a sensible structure for organizing your users into groups and restrictive policies.
The following example shows how to:
- Create users.
- Create users login profiles with a PGP Public key.
- Create group and group policy that allows self management of IAM profile.
- Attach users to group.
- Create Virtual MFA devices for users.
- Provide each user with the output QR Code and password.
variable "users" {
type = set(string)
default = [
"test@example.com",
"test2@example.com"
]
}
resource "aws_iam_user" "test_users" {
for_each = toset(var.users)
name = each.key
}
resource "aws_iam_user_login_profile" "test_users_profile" {
for_each = var.users
user = each.key
# Key pair created using GnuPG, this is the public key
pgp_key = file("path/to/gpg_pub_key_base64.pem")
password_reset_required = true
lifecycle {
ignore_changes = [
password_length,
password_reset_required,
pgp_key,
]
}
}
resource "aws_iam_virtual_mfa_device" "test_mfa" {
for_each = toset(var.users)
virtual_mfa_device_name = each.key
}
resource "aws_iam_group" "enforce_mfa_group" {
name = "EnforceMFAGroup"
}
resource "aws_iam_group_membership" "enforce_mfa_group_membership" {
name = "EnforceMFAGroupMembership"
group = aws_iam_group.enforce_mfa_group.name
users = [for k in aws_iam_user.test_users : k.name]
}
resource "aws_iam_group_policy" "enforce_mfa_policy" {
name = "EnforceMFAGroupPolicy"
group = aws_iam_group.enforce_mfa_group.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowViewAccountInfo",
"Effect": "Allow",
"Action": [
"iam:GetAccountPasswordPolicy",
"iam:ListVirtualMFADevices"
],
"Resource": "*"
},
{
"Sid": "AllowManageOwnPasswords",
"Effect": "Allow",
"Action": [
"iam:ChangePassword",
"iam:GetUser"
],
"Resource": "arn:aws:iam::*:user/$${aws:username}"
},
{
"Sid": "AllowManageOwnAccessKeys",
"Effect": "Allow",
"Action": [
"iam:CreateAccessKey",
"iam:DeleteAccessKey",
"iam:ListAccessKeys",
"iam:UpdateAccessKey"
],
"Resource": "arn:aws:iam::*:user/$${aws:username}"
},
{
"Sid": "AllowManageOwnSigningCertificates",
"Effect": "Allow",
"Action": [
"iam:DeleteSigningCertificate",
"iam:ListSigningCertificates",
"iam:UpdateSigningCertificate",
"iam:UploadSigningCertificate"
],
"Resource": "arn:aws:iam::*:user/$${aws:username}"
},
{
"Sid": "AllowManageOwnSSHPublicKeys",
"Effect": "Allow",
"Action": [
"iam:DeleteSSHPublicKey",
"iam:GetSSHPublicKey",
"iam:ListSSHPublicKeys",
"iam:UpdateSSHPublicKey",
"iam:UploadSSHPublicKey"
],
"Resource": "arn:aws:iam::*:user/$${aws:username}"
},
{
"Sid": "AllowManageOwnGitCredentials",
"Effect": "Allow",
"Action": [
"iam:CreateServiceSpecificCredential",
"iam:DeleteServiceSpecificCredential",
"iam:ListServiceSpecificCredentials",
"iam:ResetServiceSpecificCredential",
"iam:UpdateServiceSpecificCredential"
],
"Resource": "arn:aws:iam::*:user/$${aws:username}"
},
{
"Sid": "AllowManageOwnVirtualMFADevice",
"Effect": "Allow",
"Action": [
"iam:CreateVirtualMFADevice",
"iam:DeleteVirtualMFADevice"
],
"Resource": "arn:aws:iam::*:mfa/$${aws:username}"
},
{
"Sid": "AllowManageOwnUserMFA",
"Effect": "Allow",
"Action": [
"iam:DeactivateMFADevice",
"iam:EnableMFADevice",
"iam:ListMFADevices",
"iam:ResyncMFADevice"
],
"Resource": "arn:aws:iam::*:user/$${aws:username}"
},
{
"Sid": "DenyAllExceptListedIfNoMFA",
"Effect": "Deny",
"NotAction": [
"iam:CreateVirtualMFADevice",
"iam:EnableMFADevice",
"iam:GetUser",
"iam:ListMFADevices",
"iam:ListVirtualMFADevices",
"iam:ResyncMFADevice",
"sts:GetSessionToken"
],
"Resource": "*",
"Condition": {
"BoolIfExists": {
"aws:MultiFactorAuthPresent": "false"
}
}
}
]
}
POLICY
}
output "user_password_map" {
# Outputs a map in the format {"test@example.com": <PGPEncryptedPassword>, "test2@example.com": <PGPEncryptedPassword>}
value = { for k, v in aws_iam_user_login_profile.test_users_profile : k => v.password }
}
output "user_qr_map" {
# Outputs a map in the format {"test@example.com": <QRCode>, "test2@example.com": <QRCode>}
value = { for k, v in aws_iam_virtual_mfa_device.test_mfa : k => v.qr_code_png }
}
AWS console
To enable MFA for any user accounts with AWS console access, see Enabling a virtual multi-factor authentication (MFA) device (console) in the AWS documentation.
AWS CLI
Create an MFA device
aws iam create-virtual-mfa-device \
--virtual-mfa-device-name "test@example.com" \
--outfile ./QRCode.png \
--bootstrap-method QRCodePNG
Enable MFA device for existing user
aws iam enable-mfa-device \
--user-name "test@example.com" \
--serial-number "arn:aws:iam::123456976749:mfa/test@example.com" \
--authentication-code1 123456 \
--authentication-code2 654321
Learn about this finding type's supported assets and scan settings.
Iam User Unused Credentials Check
Category name in the API: IAM_USER_UNUSED_CREDENTIALS_CHECK
This checks for any IAM passwords or active access keys that have not been used in the last 90 days.
Best practices recommend that you remove, deactivate, or rotate all credentials unused for 90 days or more. This reduces the window of opportunity for credentials associated with a compromised or abandoned account to be used.
Recommendation: Checks that all AWS IAM users have passwords or active access keys that have not been used in maxCredentialUsageAge days (default 90)
To remediate this finding, complete the following steps:
Terraform
In order to remove expired access keys created via Terraform, remove the aws_iam_access_key resource from your module and apply the change.
In order to reset an IAM user login password, use the -replace flag when running terraform apply.
Assuming the following user login profile:
resource "aws_iam_user" "example" {
name = "test@example.com"
path = "/users/"
force_destroy = true
}
resource "aws_iam_user_login_profile" "example" {
user = aws_iam_user.example.name
pgp_key = "keybase:some_person_that_exists"
}
Run the following command to reset the user's login profile password
terraform apply -replace="aws_iam_user_login_profile.example"
AWS console
To disable credentials for inactive accounts:
- Open the IAM console at https://console.aws.amazon.com/iam/.
- Choose Users.
- Choose the name of the user that has credentials over 90 days old/last used.
- Choose Security credentials.
- For each sign-in credential and access key that hasn't been used in at least 90 days, choose Make inactive.
To require a new password from console users on next login:
- Open the IAM console at https://console.aws.amazon.com/iam/.
- Choose Users.
- Choose the name of the user that has credentials over 90 days old/last used.
- Choose Security credentials.
- Under Sign-in credentials and console password, choose Manage.
- Set a new password (autogenerated or custom).
- Check the box to Require password reset.
- Choose Apply.
AWS CLI
To make Access Keys inactive
aws iam update-access-key \
--access-key-id <value> \
--status "Inactive"
To delete Access Keys
aws iam delete-access-key \
--access-key-id <value>
To reset a user login profile password
aws iam update-login-profile \
--user-name "test@example.com" \
--password <temporary_password> \
--password-reset-required
Learn about this finding type's supported assets and scan settings.
Kms Cmk Not Scheduled For Deletion
Category name in the API: KMS_CMK_NOT_SCHEDULED_FOR_DELETION
This control checks whether KMS keys are scheduled for deletion. The control fails if a KMS key is scheduled for deletion.
KMS keys cannot be recovered once deleted. Data encrypted under a KMS key is also permanently unrecoverable if the KMS key is deleted. If meaningful data has been encrypted under a KMS key scheduled for deletion, consider decrypting the data or re-encrypting the data under a new KMS key unless you are intentionally performing a cryptographic erasure.
When a KMS key is scheduled for deletion, a mandatory waiting period is enforced to allow time to reverse the deletion, if it was scheduled in error. The default waiting period is 30 days, but it can be reduced to as short as 7 days when the KMS key is scheduled for deletion. During the waiting period, the scheduled deletion can be canceled and the KMS key will not be deleted.
For additional information regarding deleting KMS keys, see Deleting KMS keys in the AWS Key Management Service Developer Guide.
Recommendation: Checks that all CMKs are not scheduled for deletion
To cancel a scheduled KMS key deletion, see To cancel key deletion under Scheduling and canceling key deletion (console) in the AWS Key Management Service Developer Guide.
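As an illustrative sketch, a pending deletion can also be canceled from the AWS CLI (the key ID is a placeholder); canceling deletion leaves the key in the Disabled state, so it must be re-enabled before use:
aws kms cancel-key-deletion --key-id <key-id>
aws kms enable-key --key-id <key-id>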
Learn about this finding type's supported assets and scan settings.
Lambda Concurrency Check
Category name in the API: LAMBDA_CONCURRENCY_CHECK
Checks if the Lambda function is configured with a function-level concurrent execution limit. The rule is NON_COMPLIANT if the Lambda function is not configured with a function-level concurrent execution limit.
Recommendation: Checks whether Lambda functions are configured with a function-level concurrent execution limit
To configure a function-level concurrent execution limit, see Configuring reserved concurrency in the AWS Lambda documentation.
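As a hedged example, a reserved concurrency limit can also be set from the AWS CLI; the function name and the value of 100 below are illustrative placeholders:
aws lambda put-function-concurrency \
--function-name <function-name> \
--reserved-concurrent-executions 100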
Learn about this finding type's supported assets and scan settings.
Lambda Dlq Check
Category name in the API: LAMBDA_DLQ_CHECK
Checks if a Lambda function is configured with a dead-letter queue. The rule is NON_COMPLIANT if the Lambda function is not configured with a dead-letter queue.
Recommendation: Checks whether Lambda functions are configured with a dead letter queue
To update Lambda functions to use DLQs, see Dead-letter queues in the AWS documentation.
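For example, as an illustrative sketch, an existing SQS queue or SNS topic can be attached as a dead-letter queue from the AWS CLI; the function name and target ARN are placeholders:
aws lambda update-function-configuration \
--function-name <function-name> \
--dead-letter-config TargetArn=<sqs-queue-or-sns-topic-arn>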
Learn about this finding type's supported assets and scan settings.
Lambda Function Public Access Prohibited
Category name in the API: LAMBDA_FUNCTION_PUBLIC_ACCESS_PROHIBITED
AWS best practices recommend that Lambda functions not be publicly exposed. This policy checks all Lambda functions deployed across all enabled regions within your account and fails if they are configured to allow public access.
Recommendation: Checks whether the policy attached to the Lambda function prohibits public access
To remediate this finding, complete the following steps:
Terraform
The following example shows how to use Terraform to provision an IAM role that restricts access to a Lambda function, and to attach that role to the function.
resource "aws_iam_role" "iam_for_lambda" {
name = "iam_for_lambda"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_lambda_function" "test_lambda" {
filename = "lambda_function_payload.zip"
function_name = "lambda_function_name"
role = aws_iam_role.iam_for_lambda.arn
handler = "index.test"
source_code_hash = filebase64sha256("lambda_function_payload.zip")
runtime = "nodejs12.x"
}
AWS console
If a Lambda function fails this control, it indicates that the resource-based policy statement for the Lambda function allows public access.
To remediate the issue, you must update the policy to remove the permissions or to add the AWS:SourceAccount condition. You can only update the resource-based policy from the Lambda API.
The following instructions use the console to review the policy and the AWS Command Line Interface to remove the permissions.
To view the resource-based policy for a Lambda function
- Open the AWS Lambda console at https://console.aws.amazon.com/lambda/.
- In the navigation pane, choose Functions.
- Choose the function.
- Choose Permissions. The resource-based policy shows the permissions that are applied when another account or AWS service attempts to access the function.
- Examine the resource-based policy.
- Identify the policy statement that has Principal field values that make the policy public. For example, allowing
"*"
or{ "AWS": "*" }
.
You cannot edit the policy from the console. To remove permissions from the function, you use the remove-permission command from the AWS CLI.
Note the value of the statement ID (Sid) for the statement that you want to remove.
AWS CLI
To use the CLI to remove permissions from a Lambda function, issue the remove-permission command as follows.
aws lambda remove-permission \
--function-name <value> \
--statement-id <value>
Learn about this finding type's supported assets and scan settings.
Lambda Inside Vpc
Category name in the API: LAMBDA_INSIDE_VPC
Checks whether a Lambda function is in a VPC. You might see failed findings for Lambda@Edge resources.
It does not evaluate the VPC subnet routing configuration to determine public reachability.
Recommendation: Checks whether the Lambda function exists within a VPC
To remediate this finding, complete the following steps:
AWS console
To configure a function to connect to private subnets in a virtual private cloud (VPC) in your account:
- Open the AWS Lambda console at https://console.aws.amazon.com/lambda/.
- Navigate to Functions and then select your Lambda function.
- Scroll to Network and then select a VPC with the connectivity requirements of the function.
- To run your functions in high availability mode, Security Hub recommends that you choose at least two subnets.
- Choose at least one security group that has the connectivity requirements of the function.
- Choose Save.
For more information see the section on configuring a Lambda function to access resources in a VPC in the AWS Lambda Developer Guide.
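Alternatively, as a hedged sketch, the same VPC configuration can be applied from the AWS CLI; the function name, subnet IDs, and security group ID are placeholders:
aws lambda update-function-configuration \
--function-name <function-name> \
--vpc-config SubnetIds=<subnet-id-1>,<subnet-id-2>,SecurityGroupIds=<security-group-id>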
Learn about this finding type's supported assets and scan settings.
Mfa Delete Enabled S3 Buckets
Category name in the API: MFA_DELETE_ENABLED_S3_BUCKETS
Once MFA Delete is enabled on your sensitive and classified S3 bucket it requires the user to have two forms of authentication.
Recommendation: Ensure MFA Delete is enabled on S3 buckets
Perform the steps below to enable MFA delete on an S3 bucket.
Note:
-You cannot enable MFA Delete using the AWS Management Console. You must use the AWS CLI or API.
-You must use your 'root' account to enable MFA Delete on S3 buckets.
From Command line:
- Run the s3api put-bucket-versioning command
aws s3api put-bucket-versioning --profile my-root-profile --bucket Bucket_Name --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa "arn:aws:iam::aws_account_id:mfa/root-account-mfa-device passcode"
Learn about this finding type's supported assets and scan settings.
Mfa Enabled Root User Account
Category name in the API: MFA_ENABLED_ROOT_USER_ACCOUNT
The 'root' user account is the most privileged user in an AWS account. Multi-factor Authentication (MFA) adds an extra layer of protection on top of a username and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their username and password as well as for an authentication code from their AWS MFA device.
Note: When virtual MFA is used for 'root' accounts, it is recommended that the device used is NOT a personal device, but rather a dedicated mobile device (tablet or phone) that is managed to be kept charged and secured independent of any individual personal devices. ("non-personal virtual MFA") This lessens the risks of losing access to the MFA due to device loss, device trade-in or if the individual owning the device is no longer employed at the company.
Recommendation: Ensure MFA is enabled for the 'root' user account
Perform the following to establish MFA for the 'root' user account:
- Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
Note: to manage MFA devices for the 'root' AWS account, you must use your 'root' account credentials to sign in to AWS. You cannot manage MFA devices for the 'root' account using other credentials.
- Choose Dashboard, and under Security Status, expand Activate MFA on your root account.
- Choose Activate MFA.
- In the wizard, choose A virtual MFA device and then choose Next Step.
- IAM generates and displays configuration information for the virtual MFA device, including a QR code graphic. The graphic is a representation of the 'secret configuration key' that is available for manual entry on devices that do not support QR codes.
- Open your virtual MFA application. (For a list of apps that you can use for hosting virtual MFA devices, see Virtual MFA Applications.) If the virtual MFA application supports multiple accounts (multiple virtual MFA devices), choose the option to create a new account (a new virtual MFA device).
- Determine whether the MFA app supports QR codes, and then do one of the following:
- Use the app to scan the QR code. For example, you might choose the camera icon or choose an option similar to Scan code, and then use the device's camera to scan the code.
- In the Manage MFA Device wizard, choose Show secret key for manual configuration, and then type the secret configuration key into your MFA application.
When you are finished, the virtual MFA device starts generating one-time passwords.
In the Manage MFA Device wizard, in the Authentication Code 1 box, type the one-time password that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second one-time password into the Authentication Code 2 box. Choose Assign Virtual MFA.
Learn about this finding type's supported assets and scan settings.
Multi Factor Authentication Mfa Enabled All Iam Users Console
Category name in the API: MULTI_FACTOR_AUTHENTICATION_MFA_ENABLED_ALL_IAM_USERS_CONSOLE
Multi-Factor Authentication (MFA) adds an extra layer of authentication assurance beyond traditional credentials. With MFA enabled, when a user signs in to the AWS Console, they will be prompted for their user name and password as well as for an authentication code from their physical or virtual MFA token. It is recommended that MFA be enabled for all accounts that have a console password.
Recommendation: Ensure multi-factor authentication (MFA) is enabled for all IAM users that have a console password
To remediate this finding, complete the following steps:
AWS console
- Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
- In the left pane, select Users.
- In the User Name list, choose the name of the intended MFA user.
- Choose the Security Credentials tab, and then choose Manage MFA Device.
- In the Manage MFA Device wizard, choose Virtual MFA device, and then choose Continue.
IAM generates and displays configuration information for the virtual MFA device, including a QR code graphic. The graphic is a representation of the 'secret configuration key' that is available for manual entry on devices that do not support QR codes.
- Open your virtual MFA application. (For a list of apps that you can use for hosting virtual MFA devices, see Virtual MFA Applications at https://aws.amazon.com/iam/details/mfa/#Virtual_MFA_Applications). If the virtual MFA application supports multiple accounts (multiple virtual MFA devices), choose the option to create a new account (a new virtual MFA device).
- Determine whether the MFA app supports QR codes, and then do one of the following:
  - Use the app to scan the QR code. For example, you might choose the camera icon or choose an option similar to Scan code, and then use the device's camera to scan the code.
  - In the Manage MFA Device wizard, choose Show secret key for manual configuration, and then type the secret configuration key into your MFA application.
When you are finished, the virtual MFA device starts generating one-time passwords.
- In the Manage MFA Device wizard, in the MFA Code 1 box, type the one-time password that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second one-time password into the MFA Code 2 box.
- Click Assign MFA.
Learn about this finding type's supported assets and scan settings.
No Network Acls Allow Ingress 0 0 0 0 Remote Server Administration
Category name in the API: NO_NETWORK_ACLS_ALLOW_INGRESS_0_0_0_0_REMOTE_SERVER_ADMINISTRATION
The Network Access Control List (NACL) function provides stateless filtering of ingress and egress network traffic to AWS resources. It is recommended that no NACL allows unrestricted ingress access to remote server administration ports, such as SSH on port 22 and RDP on port 3389, using either the TCP (6), UDP (17), or ALL (-1) protocols.
AWS console
Perform the following:
1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home
2. In the left pane, click Network ACLs
3. For each network ACL to remediate, perform the following:
- Select the network ACL
- Click the Inbound Rules tab
- Click Edit inbound rules
- Either A) update the Source field to a range other than 0.0.0.0/0, or B) click Delete to remove the offending inbound rule
- Click Save
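AWS CLI
As a hedged alternative, the offending NACL entry can be removed from the AWS CLI once you know its rule number; the ACL ID and rule number are placeholders:
aws ec2 describe-network-acls --network-acl-ids <network-acl-id>
aws ec2 delete-network-acl-entry \
--network-acl-id <network-acl-id> \
--rule-number <rule-number> \
--ingress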
Learn about this finding type's supported assets and scan settings.
No Root User Account Access Key Exists
Category name in the API: NO_ROOT_USER_ACCOUNT_ACCESS_KEY_EXISTS
The 'root' user account is the most privileged user in an AWS account. AWS Access Keys provide programmatic access to a given AWS account. It is recommended that all access keys associated with the 'root' user account be deleted.
Recommendation: Ensure no 'root' user account access key exists
To remediate this finding, complete the following steps:
AWS console
- Sign in to the AWS Management Console as 'root' and open the IAM console at https://console.aws.amazon.com/iam/.
- Click on <root_account> at the top right and select My Security Credentials from the drop-down list.
- On the pop-out screen, click on Continue to Security Credentials.
- Click on Access Keys (Access Key ID and Secret Access Key).
- Under the Status column, check whether there are any keys that are active.
- Click Delete (note: deleted keys cannot be recovered).
Note: While a key can be made inactive, this inactive key will still show up in the CLI command from the audit procedure, and may lead to a key being falsely flagged as being non-compliant.
Learn about this finding type's supported assets and scan settings.
No Security Groups Allow Ingress 0 0 0 0 Remote Server Administration
Category name in the API: NO_SECURITY_GROUPS_ALLOW_INGRESS_0_0_0_0_REMOTE_SERVER_ADMINISTRATION
Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. It is recommended that no security group allows unrestricted ingress access to remote server administration ports, such as SSH on port 22 and RDP on port 3389, using either the TCP (6), UDP (17), or ALL (-1) protocols.
Perform the following to implement the prescribed state:
- Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home
- In the left pane, click Security Groups
- For each security group, perform the following:
  - Select the security group
  - Click the Inbound Rules tab
  - Click the Edit inbound rules button
  - Identify the rules to be edited or removed
  - Either A) update the Source field to a range other than 0.0.0.0/0, or B) click Delete to remove the offending inbound rule
  - Click Save rules
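As an illustrative sketch, an open SSH or RDP rule can also be revoked from the AWS CLI; the security group ID is a placeholder:
aws ec2 revoke-security-group-ingress \
--group-id <security-group-id> \
--protocol tcp \
--port 22 \
--cidr 0.0.0.0/0
aws ec2 revoke-security-group-ingress \
--group-id <security-group-id> \
--protocol tcp \
--port 3389 \
--cidr 0.0.0.0/0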
Learn about this finding type's supported assets and scan settings.
No Security Groups Allow Ingress 0 Remote Server Administration
Category name in the API: NO_SECURITY_GROUPS_ALLOW_INGRESS_0_REMOTE_SERVER_ADMINISTRATION
Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. It is recommended that no security group allows unrestricted ingress access to remote server administration ports, such as SSH on port 22 and RDP on port 3389.
Perform the following to implement the prescribed state:
- Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home
- In the left pane, click Security Groups
- For each security group, perform the following:
  - Select the security group
  - Click the Inbound Rules tab
  - Click the Edit inbound rules button
  - Identify the rules to be edited or removed
  - Either A) update the Source field to a range other than ::/0, or B) click Delete to remove the offending inbound rule
  - Click Save rules
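As a hedged example, the equivalent IPv6 rule can be revoked from the AWS CLI using an IP permissions structure; the security group ID is a placeholder:
aws ec2 revoke-security-group-ingress \
--group-id <security-group-id> \
--ip-permissions 'IpProtocol=tcp,FromPort=22,ToPort=22,Ipv6Ranges=[{CidrIpv6=::/0}]'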
Learn about this finding type's supported assets and scan settings.
One Active Access Key Available Any Single Iam User
Category name in the API: ONE_ACTIVE_ACCESS_KEY_AVAILABLE_ANY_SINGLE_IAM_USER
Access keys are long-term credentials for an IAM user or the AWS account 'root' user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK)
Recommendation: Ensure there is only one active access key available for any single IAM user
To remediate this finding, complete the following steps:
AWS console
- Sign in to the AWS Management Console and navigate to the IAM dashboard at https://console.aws.amazon.com/iam/.
- In the left navigation panel, choose Users.
- Click on the IAM user name that you want to examine.
- On the IAM user configuration page, select the Security Credentials tab.
- In the Access Keys section, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working.
- In the same Access Keys section, identify your non-operational access keys (other than the chosen one) and deactivate them by clicking the Make Inactive link.
- If you receive the Change Key Status confirmation box, click Deactivate to switch off the selected key.
- Repeat steps no. 3 – 7 for each IAM user in your AWS account.
AWS CLI
- Using the IAM user and access key information provided in the Audit CLI, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working.
- Run the update-access-key command below using the IAM user name and the non-operational access key IDs to deactivate the unnecessary key(s). Refer to the Audit section to identify the unnecessary access key ID for the selected IAM user.
Note - the command does not return any output:
aws iam update-access-key --access-key-id <access-key-id> --status Inactive --user-name <user-name>
- To confirm that the selected access key pair has been successfully deactivated, run the list-access-keys audit command again for that IAM user:
aws iam list-access-keys --user-name <user-name>
- The command output should expose the metadata for each access key associated with the IAM user. If the Status of the non-operational key pair(s) is set to Inactive, the key has been successfully deactivated and the IAM user access configuration now adheres to this recommendation.
- Repeat steps no. 1 – 3 for each IAM user in your AWS account.
Learn about this finding type's supported assets and scan settings.
Public Access Given Rds Instance
Category name in the API: PUBLIC_ACCESS_GIVEN_RDS_INSTANCE
Ensure and verify that RDS database instances provisioned in your AWS account do restrict unauthorized access in order to minimize security risks. To restrict access to any publicly accessible RDS database instance, you must disable the database Publicly Accessible flag and update the VPC security group associated with the instance.
Recommendation: Ensure that public access is not given to RDS Instance
To remediate this finding, complete the following steps:
AWS console
- Log in to the AWS Management Console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/.
- Under the navigation panel, on the RDS Dashboard, click Databases.
- Select the RDS instance that you want to update.
- Click Modify from the dashboard top menu.
- On the Modify DB Instance panel, under the Connectivity section, click on Additional connectivity configuration and update the value for Publicly Accessible to Not publicly accessible to restrict public access. Follow the steps below to update subnet configurations:
- Select the Connectivity and security tab, and click on the VPC attribute value inside the Networking section.
- Select the Details tab from the VPC dashboard bottom panel and click on the Route table configuration attribute value.
- On the Route table details page, select the Routes tab from the dashboard bottom panel and click on Edit routes.
- On the Edit routes page, update the Destination of the Target which is set to igw-xxxxx and click on Save routes.
- On the Modify DB Instance panel, click on Continue and, in the Scheduling of modifications section, perform one of the following actions based on your requirements:
- Select Apply during the next scheduled maintenance window to apply the changes automatically during the next scheduled maintenance window.
- Select Apply immediately to apply the changes right away. With this option, any pending modifications will be asynchronously applied as soon as possible, regardless of the maintenance window setting for this RDS database instance. Note that any changes available in the pending modifications queue are also applied. If any of the pending modifications require downtime, choosing this option can cause unexpected downtime for the application.
- Repeat steps 3 to 6 for each RDS instance available in the current region.
- Change the AWS region from the navigation bar to repeat the process for other regions.
AWS CLI
- Run the describe-db-instances command to list all RDS database instance identifiers available in the selected AWS region:
aws rds describe-db-instances --region <region-name> --query 'DBInstances[*].DBInstanceIdentifier'
- The command output should return each database instance identifier.
- Run the modify-db-instance command to modify the selected RDS instance configuration. Then use the following command to disable the Publicly Accessible flag for the selected RDS instance. This command uses the --apply-immediately flag; if you want to avoid any downtime, the --no-apply-immediately flag can be used instead:
aws rds modify-db-instance --region <region-name> --db-instance-identifier <db-name> --no-publicly-accessible --apply-immediately
- The command output should reveal the PubliclyAccessible configuration under pending values, which should get applied at the specified time.
- Updating the Internet Gateway destination via the AWS CLI is not currently supported. To update information about the Internet Gateway, use the AWS console procedure above.
- Repeat steps 1 to 5 for each RDS instance provisioned in the current region.
- Change the AWS region by using the --region filter to repeat the process for other regions.
Learn about this finding type's supported assets and scan settings.
Rds Enhanced Monitoring Enabled
Category name in the API: RDS_ENHANCED_MONITORING_ENABLED
Enhanced monitoring provides real-time metrics on the operating system that the RDS instance runs on, via an agent installed in the instance.
For more details, see Monitoring OS metrics with Enhanced Monitoring.
Recommendation: Checks whether enhanced monitoring is enabled for all RDS DB instances
To remediate this finding, complete the following steps:
Terraform
To remediate this control, enable Enhanced Monitoring on your RDS instances as follows:
Create an IAM Role for RDS:
resource "aws_iam_role" "rds_logging" {
name = "CustomRoleForRDSMonitoring"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = "CustomRoleForRDSLogging"
Principal = {
Service = "monitoring.rds.amazonaws.com"
}
},
]
})
}
Retrieve the AWS Managed Policy for RDS Enhanced Monitoring:
data "aws_iam_policy" "rds_logging" {
name = "AmazonRDSEnhancedMonitoringRole"
}
Attach the policy to the role:
resource "aws_iam_policy_attachment" "rds_logging" {
name = "AttachRdsLogging"
roles = [aws_iam_role.rds_logging.name]
policy_arn = data.aws_iam_policy.rds_logging.arn
}
Define a monitoring interval and a monitoring role ARN on the violating RDS instance to enable Enhanced Monitoring:
resource "aws_db_instance" "default" {
identifier = "test-rds"
allocated_storage = 10
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t3.micro"
db_name = "mydb"
username = "foo"
password = "foobarbaz"
parameter_group_name = "default.mysql5.7"
skip_final_snapshot = true
monitoring_interval = 60
monitoring_role_arn = aws_iam_role.rds_logging.arn
}
AWS console
You can turn on Enhanced Monitoring when you create a DB instance, Multi-AZ DB cluster, or read replica, or when you modify a DB instance or Multi-AZ DB cluster. If you modify a DB instance to turn on Enhanced Monitoring, you don't need to reboot your DB instance for the change to take effect.
You can turn on Enhanced Monitoring in the RDS console when you do one of the following actions in the Databases page:
- Create a DB instance or Multi-AZ DB cluster - Choose Create database.
- Create a read replica - Choose Actions, then Create read replica.
- Modify a DB instance or Multi-AZ DB cluster - Choose Modify.
To turn Enhanced Monitoring on or off in the RDS console
- Scroll to Additional configuration.
- In Monitoring, choose Enable Enhanced Monitoring for your DB instance or read replica. To turn Enhanced Monitoring off, choose Disable Enhanced Monitoring.
- Set the Monitoring Role property to the IAM role that you created to permit Amazon RDS to communicate with Amazon CloudWatch Logs for you, or choose Default to have RDS create a role for you named rds-monitoring-role.
- Set the Granularity property to the interval, in seconds, between points when metrics are collected for your DB instance or read replica. The Granularity property can be set to one of the following values: 1, 5, 10, 15, 30, or 60. The fastest that the RDS console refreshes is every 5 seconds. If you set the granularity to 1 second in the RDS console, you still see updated metrics only every 5 seconds. You can retrieve 1-second metric updates by using CloudWatch Logs.
AWS CLI
Create the RDS IAM role:
aws iam create-role \
--role-name "CustomRoleForRDSMonitoring" \
--assume-role-policy-document file://rds-assume-role.json
Attach the policy AmazonRDSEnhancedMonitoringRole to the role:
aws iam attach-role-policy \
--role-name "CustomRoleForRDSMonitoring" \
--policy-arn "arn:aws:iam::aws:policy/service-role/AmazonRDSEnhancedMonitoringRole"
Modify the RDS instance to enable Enhanced Monitoring by setting --monitoring-interval and --monitoring-role-arn:
aws rds modify-db-instance \
--db-instance-identifier "test-rds" \
--monitoring-interval 30 \
--monitoring-role-arn "arn:aws:iam::<account_id>:role/CustomRoleForRDSMonitoring"
Learn about this finding type's supported assets and scan settings.
Rds Instance Deletion Protection Enabled
Category name in the API: RDS_INSTANCE_DELETION_PROTECTION_ENABLED
Enabling instance deletion protection is an additional layer of protection against accidental database deletion or deletion by an unauthorized entity.
While deletion protection is enabled, an RDS DB instance cannot be deleted. Before a deletion request can succeed, deletion protection must be disabled.
Recommendation: Checks if all RDS instances have deletion protection enabled
To remediate this finding, complete the following steps:
Terraform
In order to remediate this control, set deletion_protection to true in the aws_db_instance resource.
resource "aws_db_instance" "example" {
# ... other configuration ...
deletion_protection = true
}
AWS console
To enable deletion protection for an RDS DB instance
- Open the Amazon RDS console at https://console.aws.amazon.com/rds/.
- In the navigation pane, choose Databases, then choose the DB instance that you want to modify.
- Choose Modify.
- Under Deletion protection, choose Enable deletion protection.
- Choose Continue.
- Under Scheduling of modifications, choose when to apply modifications. The options are Apply during the next scheduled maintenance window or Apply immediately.
- Choose Modify DB Instance.
AWS CLI
The same applies to the AWS CLI. Set --deletion-protection as shown below.
aws rds modify-db-instance \
--db-instance-identifier "test-rds" \
--deletion-protection
Learn about this finding type's supported assets and scan settings.
Rds In Backup Plan
Category name in the API: RDS_IN_BACKUP_PLAN
This check evaluates if Amazon RDS DB instances are covered by a backup plan. This control fails if an RDS DB instance isn't covered by a backup plan.
AWS Backup is a fully managed backup service that centralizes and automates the backing up of data across AWS services. With AWS Backup, you can create backup policies called backup plans. You can use these plans to define your backup requirements, such as how frequently to back up your data and how long to retain those backups. Including RDS DB instances in a backup plan helps you protect your data from unintended loss or deletion.
Recommendation: RDS DB instances should be covered by a backup plan
To remediate this finding, complete the following steps:
Terraform
In order to remediate this control, set the backup_retention_period to a value of 7 or greater in the aws_db_instance resource.
resource "aws_db_instance" "example" {
# ... other Configuration ...
backup_retention_period = 7
}
AWS console
To enable automated backups immediately
- Open the Amazon RDS console at https://console.aws.amazon.com/rds/.
- In the navigation pane, choose Databases, and then choose the DB instance that you want to modify.
- Choose Modify to open the Modify DB Instance page.
- Under Backup Retention Period, choose a positive nonzero value, for example 30 days, then choose Continue.
- Select the Scheduling of modifications section and choose when to apply modifications: you can choose to Apply during the next scheduled maintenance window or Apply immediately.
- Then, on the confirmation page, choose Modify DB Instance to save your changes and enable automated backups.
AWS CLI
The same applies to the AWS CLI. In order to enable automatic backups, change the backup-retention-period to a value greater than 0.
aws rds modify-db-instance --db-instance-identifier "test-rds" --backup-retention-period 7
Learn about this finding type's supported assets and scan settings.
Rds Logging Enabled
Category name in the API: RDS_LOGGING_ENABLED
This checks whether the relevant Amazon RDS logs are enabled and sent to CloudWatch Logs.
RDS databases should have relevant logs enabled. Database logging provides detailed records of requests made to RDS. Database logs can assist with security and access audits and can help to diagnose availability issues.
Recommendation: Checks if exported logs are enabled for all RDS DB instances
To remediate this finding, complete the following steps:
Terraform
resource "aws_db_instance" "example" {
# ... other configuration for MySQL ...
enabled_cloudwatch_logs_exports = ["audit", "error", "general", "slowquery"]
parameter_group_name = aws_db_parameter_group.example.name
}
resource "aws_db_parameter_group" "example" {
name = "example-rds-parameter-group" # a static name avoids a dependency cycle with aws_db_instance.example
family = "mysql5.7"
parameter {
name = "general_log"
value = 1
}
parameter {
name = "slow_query_log"
value = 1
}
parameter {
name = "log_output"
value = "FILE"
}
}
For MariaDB, additionally create a custom option group and set option_group_name in the aws_db_instance resource.
resource "aws_db_instance" "example" {
# ... other configuration for MariaDB ...
enabled_cloudwatch_logs_exports = ["audit", "error", "general", "slowquery"]
parameter_group_name = aws_db_parameter_group.example.name
option_group_name = aws_db_option_group.example.name
}
resource "aws_db_option_group" "example" {
name = "mariadb-option-group-for-logs"
option_group_description = "MariaDB Option Group for Logs"
engine_name = "mariadb"
option {
option_name = "MARIADB_AUDIT_PLUGIN"
option_settings {
name = "SERVER_AUDIT_EVENTS"
value = "CONNECT,QUERY,TABLE,QUERY_DDL,QUERY_DML,QUERY_DCL"
}
}
}
AWS console
To create a custom DB parameter group
- Open the Amazon RDS console at https://console.aws.amazon.com/rds/.
- In the navigation pane, choose Parameter groups.
- Choose Create parameter group.
- In the Parameter group family list, choose a DB parameter group family.
- In the Type list, choose DB Parameter Group.
- In Group name, enter the name of the new DB parameter group.
- In Description, enter a description for the new DB parameter group.
- Choose Create.
To create a new option group for MariaDB logging by using the console
- Open the Amazon RDS console at https://console.aws.amazon.com/rds/.
- In the navigation pane, choose Option groups.
- Choose Create group.
- In the Create option group window, provide the following:
* Name: must be unique within your AWS account. Only letters, digits, and hyphens.
* Description: Only used for display purposes.
* Engine: select your DB engine.
* Major engine version: select the major version of your DB engine.
- Choose Create.
- Choose the name of the option group you just created.
- Choose Add option.
- Choose MARIADB_AUDIT_PLUGIN from the Option name list.
- Set SERVER_AUDIT_EVENTS to CONNECT, QUERY, TABLE, QUERY_DDL, QUERY_DML, QUERY_DCL.
- Choose Add option.
To publish SQL Server DB, Oracle DB, or PostgreSQL logs to CloudWatch Logs from the AWS Management Console
- Open the Amazon RDS console at https://console.aws.amazon.com/rds/.
- In the navigation pane, choose Databases.
- Choose the DB instance that you want to modify.
- Choose Modify.
- Under Log exports, choose all of the log files to start publishing to CloudWatch Logs.
- Log exports is available only for database engine versions that support publishing to CloudWatch Logs.
- Choose Continue. Then on the summary page, choose Modify DB Instance.
To apply a new DB parameter group or DB options group to an RDS DB instance
- Open the Amazon RDS console at https://console.aws.amazon.com/rds/.
- In the navigation pane, choose Databases.
- Choose the DB instance that you want to modify.
- Choose Modify.
- Under Database options, change the DB parameter group and DB options group as needed.
- When you finish your changes, choose Continue. Check the summary of modifications.
- Choose Modify DB Instance to save your changes.
AWS CLI
Retrieve the engine families and choose the one that matches the DB instance engine and version.
aws rds describe-db-engine-versions \
--query "DBEngineVersions[].DBParameterGroupFamily" \
--engine "mysql"
Create a parameter group according to the engine and version.
aws rds create-db-parameter-group \
--db-parameter-group-name "rds-mysql-parameter-group" \
--db-parameter-group-family "mysql5.7" \
--description "Example parameter group for logs"
Create an rds-parameters.json file containing the necessary parameters according to the DB engine. This example uses MySQL 5.7.
[
{
"ParameterName": "general_log",
"ParameterValue": "1",
"ApplyMethod": "immediate"
},
{
"ParameterName": "slow_query_log",
"ParameterValue": "1",
"ApplyMethod": "immediate"
},
{
"ParameterName": "log_output",
"ParameterValue": "FILE",
"ApplyMethod": "immediate"
}
]
Modify the parameter group to add the parameters according to the DB engine. This example uses MySQL 5.7.
aws rds modify-db-parameter-group \
--db-parameter-group-name "rds-mysql-parameter-group" \
--parameters file://rds-parameters.json
Modify the DB instance to associate the parameter group.
aws rds modify-db-instance \
--db-instance-identifier "test-rds" \
--db-parameter-group-name "rds-mysql-parameter-group"
Additionally for MariaDB, create an option group as follows.
aws rds create-option-group \
--option-group-name "rds-mariadb-option-group" \
--engine-name "mariadb" \
--major-engine-version "10.6" \
--option-group-description "Option group for MariaDB logs"
Create an rds-mariadb-options.json file as follows.
{
"OptionName": "MARIADB_AUDIT_PLUGIN",
"OptionSettings": [
{
"Name": "SERVER_AUDIT_EVENTS",
"Value": "CONNECT,QUERY,TABLE,QUERY_DDL,QUERY_DML,QUERY_DCL"
}
]
}
Add the option to the option group.
aws rds add-option-to-option-group \
--option-group-name "rds-mariadb-option-group" \
--options file://rds-mariadb-options.json
Associate the option group to the DB Instance by modifying the MariaDB instance.
aws rds modify-db-instance \
--db-instance-identifier "rds-test-mariadb" \
--option-group-name "rds-mariadb-option-group"
Learn about this finding type's supported assets and scan settings.
Rds Multi Az Support
Category name in the API: RDS_MULTI_AZ_SUPPORT
RDS DB instances should be configured for multiple Availability Zones (AZs). This ensures the availability of the data stored. Multi-AZ deployments allow for automated failover if there is an issue with Availability Zone availability and during regular RDS maintenance.
Recommendation: Checks whether high availability is enabled for all RDS DB instances
To remediate this finding, complete the following steps:
Terraform
In order to remediate this control, set multi_az to true in the aws_db_instance resource.
resource "aws_db_instance" "example" {
# ... other configuration ...
multi_az = true
}
AWS console
To enable multiple Availability Zones for a DB instance
- Open the Amazon RDS console at https://console.aws.amazon.com/rds/.
- In the navigation pane, choose Databases, and then choose the DB instance that you want to modify.
- Choose Modify. The Modify DB Instance page appears.
- Under Instance Specifications, set Multi-AZ deployment to Yes.
- Choose Continue and then check the summary of modifications.
- (Optional) Choose Apply immediately to apply the changes immediately. Choosing this option can cause an outage in some cases. For more information, see Using the Apply Immediately setting in the Amazon RDS User Guide.
- On the confirmation page, review your changes. If they are correct, choose Modify DB Instance to save your changes.
AWS CLI
The same applies to the AWS CLI. Enable Multi-AZ support by providing the --multi-az option.
aws rds modify-db-instance \
--db-instance-identifier "test-rds" \
--multi-az
Learn about this finding type's supported assets and scan settings.
Redshift Cluster Configuration Check
Category name in the API: REDSHIFT_CLUSTER_CONFIGURATION_CHECK
This checks for essential elements of a Redshift cluster: encryption at rest, logging and node type.
These configuration items are important in the maintenance of a secure and observable Redshift cluster.
Recommendation: Checks that all Redshift clusters have encryption at rest, logging and node type.
To remediate this finding, complete the following steps:
Terraform
resource "aws_kms_key" "redshift_encryption" {
description = "Used for Redshift encryption configuration"
enable_key_rotation = true
}
resource "aws_redshift_cluster" "example" {
# ... other configuration ...
encrypted = true
kms_key_id = aws_kms_key.redshift_encryption.id
logging {
enable = true
log_destination_type = "cloudwatch"
log_exports = ["connectionlog", "userlog", "useractivitylog"]
}
}
AWS console
To enable cluster audit logging
- Open the Amazon Redshift console at https://console.aws.amazon.com/redshift/.
- In the navigation menu, choose Clusters, then choose the name of the cluster to modify.
- Choose Properties.
- Choose Edit and Edit audit logging.
- Set Configure audit logging to Turn on, set Log export type to CloudWatch (recommended) and choose the logs to export.
In order to use AWS S3 to manage Redshift audit logs, see Redshift - Database audit logging on the AWS Documentation.
- Choose Save changes.
To modify database encryption on a cluster
- Sign in to the AWS Management Console and open the Amazon Redshift console at https://console.aws.amazon.com/redshift/.
- On the navigation menu, choose Clusters, then choose the cluster for which you want to modify encryption.
- Choose Properties.
- Choose Edit and Edit encryption.
- Choose the Encryption to use (KMS or HSM) and provide:
- For KMS: key to use
- For HSM: connection and client certificate
AWS CLI
- Create a KMS key and retrieve the key ID:
aws kms create-key \
--description "Key to encrypt Redshift Clusters"
- Modify the cluster:
aws redshift modify-cluster \
--cluster-identifier "test-redshift-cluster" \
--encrypted \
--kms-key-id <value>
Learn about this finding type's supported assets and scan settings.
Redshift Cluster Maintenancesettings Check
Category name in the API: REDSHIFT_CLUSTER_MAINTENANCESETTINGS_CHECK
Automatic major version upgrades happen according to the maintenance window.
Recommendation: Checks that all Redshift clusters have allowVersionUpgrade enabled and preferredMaintenanceWindow and automatedSnapshotRetentionPeriod set
To remediate this finding, complete the following steps:
Terraform
The default values provided by Terraform for these attributes are compliant with this check. If a Redshift cluster fails the check, review the requirements and remove any overrides of the following attributes of the aws_redshift_cluster resource.
resource "aws_redshift_cluster" "example" {
# ...other configuration ...
# The following values are compliant and set by default if omitted.
allow_version_upgrade = true
preferred_maintenance_window = "sat:10:00-sat:10:30"
automated_snapshot_retention_period = 1
}
AWS console
When creating a Redshift cluster via the AWS console, the default values are already compliant with this control.
For more information, see Managing clusters using the console
AWS CLI
To remediate this control using the AWS CLI, run the following:
aws redshift modify-cluster \
--cluster-identifier "test-redshift-cluster" \
--allow-version-upgrade
Learn about this finding type's supported assets and scan settings.
Redshift Cluster Public Access Check
Category name in the API: REDSHIFT_CLUSTER_PUBLIC_ACCESS_CHECK
The PubliclyAccessible attribute of the Amazon Redshift cluster configuration indicates whether the cluster is publicly accessible. When the cluster is configured with PubliclyAccessible set to true, it is an Internet-facing instance that has a publicly resolvable DNS name, which resolves to a public IP address.
When the cluster is not publicly accessible, it is an internal instance with a DNS name that resolves to a private IP address. Unless you intend for your cluster to be publicly accessible, the cluster should not be configured with PubliclyAccessible set to true.
Recommendation: Checks whether Redshift clusters are publicly accessible
To remediate this finding, complete the following steps:
Terraform
To remediate this control, modify the Redshift cluster resource and set publicly_accessible to false (the default value is true).
resource "aws_redshift_cluster" "example" {
# ... other configuration ...
publicly_accessible = false
}
AWS console
To disable public access to an Amazon Redshift cluster
- Open the Amazon Redshift console at https://console.aws.amazon.com/redshift/.
- In the navigation menu, choose Clusters, then choose the name of the cluster with the security group to modify.
- Choose Actions, then choose Modify publicly accessible setting.
- Under Allow instances and devices outside the VPC to connect to your database through the cluster endpoint, choose No.
- Choose Confirm.
AWS CLI
Use the modify-cluster command to set --no-publicly-accessible.
aws redshift modify-cluster \
--cluster-identifier "test-redshift-cluster" \
--no-publicly-accessible
Learn about this finding type's supported assets and scan settings.
Restricted Common Ports
Category name in the API: RESTRICTED_COMMON_PORTS
This control checks whether security groups allow unrestricted incoming traffic to the specified ports that carry the highest risk. The control fails if any of the rules in a security group allow ingress traffic from '0.0.0.0/0' or '::/0' to those ports.
Unrestricted access (0.0.0.0/0) increases opportunities for malicious activity, such as hacking, denial-of-service attacks, and loss of data.
Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. No security group should allow unrestricted ingress access to the following ports:
- 20, 21 (FTP)
- 22 (SSH)
- 23 (Telnet)
- 25 (SMTP)
- 110 (POP3)
- 135 (RPC)
- 143 (IMAP)
- 445 (CIFS)
- 1433, 1434 (MSSQL)
- 3000 (Go, Node.js, and Ruby web development frameworks)
- 3306 (MySQL)
- 3389 (RDP)
- 4333 (ahsp)
- 5000 (Python web development frameworks)
- 5432 (PostgreSQL)
- 5500 (fcp-addr-srvr1)
- 5601 (OpenSearch Dashboards)
- 8080 (proxy)
- 8088 (legacy HTTP port)
- 8888 (alternative HTTP port)
- 9200 or 9300 (OpenSearch)
AWS console
To delete a security group rule:
- Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
- In the navigation pane, choose Security Groups.
- Select the security group to update, choose Actions, and then choose Edit inbound rules to remove an inbound rule or Edit outbound rules to remove an outbound rule.
- Choose the Delete button to the right of the rule to delete.
- Choose Preview changes, then choose Confirm.
For information on how to delete rules from a security group, see Configure security group rules in the Amazon EC2 User Guide.
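To identify which security groups still allow unrestricted ingress to one of these ports before editing them, you can filter with describe-security-groups. A minimal sketch; the port value 3389 is only an example and can be replaced with any port from the list above:
aws ec2 describe-security-groups \
--filters Name=ip-permission.cidr,Values='0.0.0.0/0' Name=ip-permission.from-port,Values='3389' \
--query 'SecurityGroups[].GroupId'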
Learn about this finding type's supported assets and scan settings.
Restricted Ssh
Category name in the API: RESTRICTED_SSH
Security groups provide stateful filtering of ingress and egress network traffic to AWS resources.
CIS recommends that no security group allow unrestricted ingress access to port 22. Removing unfettered connectivity to remote console services, such as SSH, reduces a server's exposure to risk.
Recommendation: Security groups should not allow ingress from 0.0.0.0/0 to port 22
To remediate this finding, complete the following steps:
AWS console
Perform the following steps for each security group associated with a VPC.
Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.
- In the left pane, choose Security groups.
- Select a security group.
- In the bottom section of the page, choose the Inbound Rules tab.
- Choose Edit rules.
- Identify the rule that allows access through port 22 and then choose the X to remove it.
- Choose Save rules.
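If you prefer the CLI, the same rule can be revoked directly. A minimal sketch, assuming an example group ID; adjust it to match the offending rule:
aws ec2 revoke-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp \
--port 22 \
--cidr 0.0.0.0/0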
Learn about this finding type's supported assets and scan settings.
Rotation Customer Created Cmks Enabled
Category name in the API: ROTATION_CUSTOMER_CREATED_CMKS_ENABLED
Checks if automatic key rotation is enabled for each key and matches the key ID of the customer created AWS KMS key. The rule is NON_COMPLIANT if the AWS Config recorder role for a resource does not have the kms:DescribeKey permission.
Recommendation: Ensure rotation for customer created CMKs is enabled
To enable automatic key rotation for AWS KMS, see Rotating AWS KMS keys in the AWS documentation.
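As a minimal CLI sketch, you can turn on rotation and then verify it for a given key; replace <kms_key_id> with the key ID from the finding:
aws kms enable-key-rotation --key-id <kms_key_id>
aws kms get-key-rotation-status --key-id <kms_key_id>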
Learn about this finding type's supported assets and scan settings.
Rotation Customer Created Symmetric Cmks Enabled
Category name in the API: ROTATION_CUSTOMER_CREATED_SYMMETRIC_CMKS_ENABLED
AWS Key Management Service (KMS) allows customers to rotate the backing key, which is key material stored within KMS and tied to the key ID of the customer created customer master key (CMK). It is the backing key that is used to perform cryptographic operations such as encryption and decryption. Automated key rotation currently retains all prior backing keys so that decryption of encrypted data can take place transparently. It is recommended that CMK key rotation be enabled for symmetric keys. Key rotation cannot be enabled for any asymmetric CMK.
Recommendation: Ensure rotation for customer created symmetric CMKs is enabled
To remediate this finding, complete the following steps:
AWS console
- Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam.
- In the left navigation pane, choose Customer managed keys.
- Select a customer managed CMK where Key spec = SYMMETRIC_DEFAULT.
- Underneath the "General configuration" panel, open the "Key rotation" tab.
- Check the "Automatically rotate this KMS key every year." checkbox.
AWS CLI
- Run the following command to enable key rotation:
aws kms enable-key-rotation --key-id <kms_key_id>
Learn about this finding type's supported assets and scan settings.
Routing Tables Vpc Peering Are Least Access
Category name in the API: ROUTING_TABLES_VPC_PEERING_ARE_LEAST_ACCESS
Checks if route tables for VPC peering are configured following the principle of least privilege.
Recommendation: Ensure routing tables for VPC peering are "least access"
To update route tables for VPC peering, see Update your route tables for a VPC peering connection in the AWS VPC user guide.
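A minimal CLI sketch for narrowing an overly broad peering route: delete the wide route and add one that covers only the subnet that actually needs to be reachable. The route table ID, CIDRs, and peering connection ID below are placeholders:
aws ec2 delete-route \
--route-table-id rtb-0123456789abcdef0 \
--destination-cidr-block 10.0.0.0/8
aws ec2 create-route \
--route-table-id rtb-0123456789abcdef0 \
--destination-cidr-block 10.0.1.0/24 \
--vpc-peering-connection-id pcx-0123456789abcdef0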
Learn about this finding type's supported assets and scan settings.
S3 Account Level Public Access Blocks
Category name in the API: S3_ACCOUNT_LEVEL_PUBLIC_ACCESS_BLOCKS
Amazon S3 Block Public Access provides settings for access points, buckets, and accounts to help you manage public access to Amazon S3 resources. By default, new buckets, access points, and objects do not allow public access.
Recommendation: Checks if the required S3 public access block settings are configured from account level
To remediate this finding, complete the following steps:
Terraform
The following Terraform resource configures account level access to S3.
resource "aws_s3_account_public_access_block" "s3_control" {
block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}
AWS console
To edit block public access settings for all the S3 buckets in an AWS account.
- Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
- Choose Block Public Access settings for this account.
- Choose Edit to change the block public access settings for all the buckets in your AWS account.
- Choose the settings that you want to change, and then choose Save changes.
- When you're asked for confirmation, enter confirm. Then choose Confirm to save your changes.
AWS CLI
aws s3control put-public-access-block \
--account-id <value> \
--public-access-block-configuration '{"BlockPublicAcls": true, "BlockPublicPolicy": true, "IgnorePublicAcls": true, "RestrictPublicBuckets": true}'
Learn about this finding type's supported assets and scan settings.
S3 Buckets Configured Block Public Access Bucket And Account Settings
Category name in the API: S3_BUCKETS_CONFIGURED_BLOCK_PUBLIC_ACCESS_BUCKET_AND_ACCOUNT_SETTINGS
Amazon S3 provides Block public access (bucket settings) and Block public access (account settings) to help you manage public access to Amazon S3 resources. By default, S3 buckets and objects are created with public access disabled. However, an AWS IAM principal with sufficient S3 permissions can enable public access at the bucket or object level. While enabled, Block public access (bucket settings) prevents an individual bucket, and its contained objects, from becoming publicly accessible. Similarly, Block public access (account settings) prevents all buckets, and contained objects, from becoming publicly accessible across the entire account.
Ensure that S3 buckets are configured with Block public access (bucket settings).
AWS console
If the audit output reads true for each of the configuration settings, Block Public Access is already set at the account level.
- Login to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
- Choose Block Public Access (account settings).
- Choose Edit to change the block public access settings for all the buckets in your AWS account.
- Choose the settings you want to change, and then choose Save. For details about each setting, pause on the i icons.
- When you're asked for confirmation, enter confirm. Then choose Confirm to save your changes.
AWS CLI
To set Block Public Access settings for this account, run the following command:
aws s3control put-public-access-block \
--public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true \
--account-id <value>
Learn about this finding type's supported assets and scan settings.
S3 Bucket Access Logging Enabled Cloudtrail S3 Bucket
Category name in the API: S3_BUCKET_ACCESS_LOGGING_ENABLED_CLOUDTRAIL_S3_BUCKET
S3 Bucket Access Logging generates a log that contains access records for each request made to your S3 bucket. An access log record contains details about the request, such as the request type, the resource specified in the request, and the time and date the request was processed. It is recommended that bucket access logging be enabled on the CloudTrail S3 bucket.
Recommendation: Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket
To remediate this finding, complete the following steps:
AWS console
- Sign in to the AWS Management Console and open the S3 console at https://console.aws.amazon.com/s3.
- Under All Buckets, click on the target S3 bucket.
- Click on Properties in the top right of the console.
- Under Bucket:, click on Logging.
- Configure bucket logging:
- Click on the Enabled checkbox.
- Select the Target Bucket from the list.
- Enter a Target Prefix.
- Click Save.
AWS CLI
- Get the name of the S3 bucket that CloudTrail is logging to:
aws cloudtrail describe-trails --region <region-name> --query trailList[*].S3BucketName
- Copy and add the target bucket name at <Logging_BucketName>, the prefix for the log file at <LogFilePrefix>, and optionally an email address at <EmailID> in the following template, and save it as <FileName.Json>:
{
"LoggingEnabled": {
"TargetBucket": "<Logging_BucketName>",
"TargetPrefix": "<LogFilePrefix>",
"TargetGrants": [
{
"Grantee": {
"Type": "AmazonCustomerByEmail",
"EmailAddress": "<EmailID>"
},
"Permission": "FULL_CONTROL"
}
]
}
}
- Run the put-bucket-logging command with the bucket name and <FileName.Json> as input. For more information, refer to put-bucket-logging:
aws s3api put-bucket-logging --bucket <BucketName> --bucket-logging-status file://<FileName.Json>
Learn about this finding type's supported assets and scan settings.
S3 Bucket Logging Enabled
Category name in the API: S3_BUCKET_LOGGING_ENABLED
The AWS S3 Server Access Logging feature records access requests to storage buckets, which is useful for security audits. By default, server access logging is not enabled for S3 buckets.
Recommendation: Checks if logging is enabled on all S3 buckets
To remediate this finding, complete the following steps:
Terraform
The following example demonstrates how to create 2 buckets:
- A logging bucket
- A compliant bucket
variable "bucket_acl_map" {
type = map(any)
default = {
"logging-bucket" = "log-delivery-write"
"compliant-bucket" = "private"
}
}
resource "aws_s3_bucket" "all" {
for_each = var.bucket_acl_map
bucket = each.key
object_lock_enabled = true
tags = {
"Pwd" = "s3"
}
}
resource "aws_s3_bucket_acl" "private" {
for_each = var.bucket_acl_map
bucket = each.key
acl = each.value
}
resource "aws_s3_bucket_versioning" "enabled" {
for_each = var.bucket_acl_map
bucket = each.key
versioning_configuration {
status = "Enabled"
}
}
resource "aws_s3_bucket_logging" "enabled" {
for_each = var.bucket_acl_map
bucket = each.key
target_bucket = aws_s3_bucket.all["logging-bucket"].id
target_prefix = "log/"
}
resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
for_each = var.bucket_acl_map
bucket = each.key
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "aws:kms"
}
}
}
AWS console
For information on how to enable S3 access logging via the AWS console, see Enabling Amazon S3 server access logging in the AWS documentation.
AWS CLI
The following example demonstrates how to:
- Create a bucket policy to grant the logging service principal permission to PutObject in your logging bucket.
policy.json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "S3ServerAccessLogsPolicy",
"Effect": "Allow",
"Principal": {"Service": "logging.s3.amazonaws.com"},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::MyBucket/Logs/*",
"Condition": {
"ArnLike": {"aws:SourceARN": "arn:aws:s3:::SOURCE-BUCKET-NAME"},
"StringEquals": {"aws:SourceAccount": "SOURCE-AWS-ACCOUNT-ID"}
}
}
]
}
aws s3api put-bucket-policy \
--bucket my-bucket \
--policy file://policy.json
- Apply the logging configuration to your bucket.
logging.json
{
"LoggingEnabled": {
"TargetBucket": "MyBucket",
"TargetPrefix": "Logs/"
}
}
aws s3api put-bucket-logging \
--bucket MyBucket \
--bucket-logging-status file://logging.json
Learn about this finding type's supported assets and scan settings.
S3 Bucket Policy Set Deny Http Requests
Category name in the API: S3_BUCKET_POLICY_SET_DENY_HTTP_REQUESTS
At the Amazon S3 bucket level, you can configure permissions through a bucket policy so that objects are accessible only through HTTPS.
Recommendation: Ensure S3 Bucket Policy is set to deny HTTP requests
To remediate this finding, complete the following steps:
AWS console
Using the AWS Policy Generator:
- Repeat steps 1-4 above.
- Click on Policy Generator at the bottom of the Bucket Policy Editor.
- Select Policy Type: S3 Bucket Policy.
- Add Statements:
- Effect = Deny
- Principal = *
- AWS Service = Amazon S3
- Actions = *
- Amazon Resource Name =
- Generate Policy.
- Copy the text and add it to the Bucket Policy.
AWS CLI
- Export the bucket policy to a json file.
aws s3api get-bucket-policy --bucket <bucket_name> --query Policy --output text > policy.json
- Modify the policy.json file by adding in this statement:
{
"Sid": "<optional>",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::<bucket_name>/*",
"Condition": {
"Bool": {
"aws:SecureTransport": "false"
}
}
}
- Apply this modified policy back to the S3 bucket:
aws s3api put-bucket-policy --bucket <bucket_name> --policy file://policy.json
Learn about this finding type's supported assets and scan settings.
S3 Bucket Replication Enabled
Category name in the API: S3_BUCKET_REPLICATION_ENABLED
This control checks whether an Amazon S3 bucket has Cross-Region Replication enabled. The control fails if the bucket doesn't have Cross-Region Replication enabled or if Same-Region Replication is also enabled.
Replication is the automatic, asynchronous copying of objects across buckets in the same or different AWS Regions. Replication copies newly created objects and object updates from a source bucket to a destination bucket or buckets. AWS best practices recommend replication for source and destination buckets that are owned by the same AWS account. In addition to availability, you should consider other systems hardening settings.
Recommendation: Checks whether S3 buckets have cross-region replication enabled
To enable Cross-Region Replication on an S3 bucket, see Configuring replication for source and destination buckets owned by the same account in the Amazon Simple Storage Service User Guide. For Source bucket, choose Apply to all objects in the bucket.
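A minimal CLI sketch, assuming versioning is already enabled on both buckets and that an IAM role with replication permissions exists; the bucket names, account ID, and role name are placeholders. Save the configuration as replication.json:
{
"Role": "arn:aws:iam::<account-id>:role/<replication-role>",
"Rules": [
{
"ID": "ReplicateAllObjects",
"Status": "Enabled",
"Priority": 1,
"Filter": {},
"DeleteMarkerReplication": {"Status": "Disabled"},
"Destination": {"Bucket": "arn:aws:s3:::<destination-bucket>"}
}
]
}
Then apply it to the source bucket:
aws s3api put-bucket-replication \
--bucket <source-bucket> \
--replication-configuration file://replication.json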
Learn about this finding type's supported assets and scan settings.
S3 Bucket Server Side Encryption Enabled
Category name in the API: S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED
This checks that your S3 bucket either has Amazon S3 default encryption enabled or that the S3 bucket policy explicitly denies put-object requests without server-side encryption.
Recommendation: Ensure all S3 buckets employ encryption-at-rest
To remediate this finding, complete the following steps:
Terraform
resource "aws_s3_bucket_server_side_encryption_configuration" "enable" {
bucket = "my-bucket"
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
AWS console
To enable default encryption on an S3 bucket
- Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
- In the left navigation pane, choose Buckets.
- Choose the S3 bucket from the list.
- Choose Properties.
- Choose Default encryption.
- For the encryption, choose either AES-256 or AWS-KMS.
- Choose AES-256 to use keys that are managed by Amazon S3 for default encryption. For more information about using Amazon S3 server-side encryption to encrypt your data, see the Amazon Simple Storage Service User Guide.
- Choose AWS-KMS to use keys that are managed by AWS KMS for default encryption. Then choose a master key from the list of the AWS KMS master keys that you have created.
- Type the Amazon Resource Name (ARN) of the AWS KMS key to use. You can find the ARN for your AWS KMS key in the IAM console, under Encryption keys. Or, you can choose a key name from the drop-down list.
- Important: if you use the AWS KMS option for your default encryption configuration, you are subject to the RPS (requests per second) quotas of AWS KMS. For more information about AWS KMS quotas and how to request a quota increase, see the AWS Key Management Service Developer Guide.
- Choose Save.
For more information about creating an AWS KMS key, see the AWS Key Management Service Developer Guide.
For more information about using AWS KMS with Amazon S3, see the Amazon Simple Storage Service User Guide.
When enabling default encryption, you might need to update your bucket policy. For more information about moving from bucket policies to default encryption, see the Amazon Simple Storage Service User Guide.
AWS CLI
aws s3api put-bucket-encryption \
--bucket my-bucket \
--server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'
Learn about this finding type's supported assets and scan settings.
S3 Bucket Versioning Enabled
Category name in the API: S3_BUCKET_VERSIONING_ENABLED
Amazon S3 versioning is a means of keeping multiple variants of an object in the same bucket and can help you to recover more easily from both unintended user actions and application failures.
Recommendation: Checks that versioning is enabled for all S3 buckets
To remediate this finding, complete the following steps:
Terraform
resource "aws_s3_bucket" "my_bucket" {
bucket = "my-bucket"
versioning {
enabled = true
}
}
AWS console
To enable or disable versioning on an S3 bucket
- Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
- In the Buckets list, choose the name of the bucket that you want to enable versioning for.
- Choose Properties.
- Under Bucket Versioning, choose Edit.
- Choose Suspend or Enable, and then choose Save changes.
AWS CLI
aws s3api put-bucket-versioning \
--bucket <bucket_name> \
--versioning-configuration Status=Enabled
Learn about this finding type's supported assets and scan settings.
S3 Default Encryption Kms
Category name in the API: S3_DEFAULT_ENCRYPTION_KMS
Checks whether the Amazon S3 buckets are encrypted with AWS Key Management Service (AWS KMS).
Recommendation: Checks that all buckets are encrypted with KMS
To remediate this finding, complete the following steps:
Terraform
resource "aws_kms_key" "s3_encryption" {
description = "Used for S3 Bucket encryption configuration"
enable_key_rotation = true
}
resource "aws_s3_bucket_server_side_encryption_configuration" "enable" {
bucket = "my-bucket"
rule {
apply_server_side_encryption_by_default {
kms_master_key_id = aws_kms_key.s3_encryption.arn
sse_algorithm = "aws:kms"
}
}
}
AWS console
To enable default encryption on an S3 bucket
- Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
- In the left navigation pane, choose Buckets.
- Choose the S3 bucket from the list.
- Choose Properties.
- Choose Default encryption.
- For the encryption, choose AWS-KMS.
- Choose AWS-KMS to use keys that are managed by AWS KMS for default encryption. Then choose a master key from the list of the AWS KMS master keys that you have created. For more information on how to create KMS keys, see the AWS Documentation - Creating Keys
- Type the Amazon Resource Name (ARN) of the AWS KMS key to use. You can find the ARN for your AWS KMS key in the IAM console, under Encryption keys. Or, you can choose a key name from the drop-down list.
- Important: this solution is subject to the RPS (requests per second) quotas of AWS KMS. For more information about AWS KMS quotas and how to request a quota increase, see the AWS Key Management Service Developer Guide.
- Choose Save.
For more information about using AWS KMS with Amazon S3, see the Amazon Simple Storage Service User Guide.
When enabling default encryption, you might need to update your bucket policy. For more information about moving from bucket policies to default encryption, see the Amazon Simple Storage Service User Guide.
AWS CLI
Create a KMS key
aws kms create-key \
--description "Key to encrypt S3 buckets"
Enable key rotation
aws kms enable-key-rotation \
--key-id <key_id_from_previous_command>
Update bucket
aws s3api put-bucket-encryption \
--bucket my-bucket \
--server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"KMSMasterKeyID": "<id_from_key>", "SSEAlgorithm": "aws:kms"}}]}'
Learn about this finding type's supported assets and scan settings.
Sagemaker Notebook Instance Kms Key Configured
Category name in the API: SAGEMAKER_NOTEBOOK_INSTANCE_KMS_KEY_CONFIGURED
Checks if an AWS Key Management Service (AWS KMS) key is configured for an Amazon SageMaker notebook instance. The rule is NON_COMPLIANT if 'KmsKeyId' is not specified for the SageMaker notebook instance.
Recommendation: Checks that all SageMaker notebook instances are configured to use KMS
To configure KMS for SageMaker, see Key Management in the Amazon SageMaker documentation.
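Because the KMS key is specified when a notebook instance is created, a minimal CLI sketch is to recreate the instance with --kms-key-id; the instance name, instance type, role ARN, and key ID below are placeholders:
aws sagemaker create-notebook-instance \
--notebook-instance-name <instance-name> \
--instance-type ml.t3.medium \
--role-arn arn:aws:iam::<account-id>:role/<execution-role> \
--kms-key-id <kms-key-id>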
Learn about this finding type's supported assets and scan settings.
Sagemaker Notebook No Direct Internet Access
Category name in the API: SAGEMAKER_NOTEBOOK_NO_DIRECT_INTERNET_ACCESS
Checks whether direct internet access is disabled for a SageMaker notebook instance. To do this, it checks whether the DirectInternetAccess field is disabled for the notebook instance.
If you configure your SageMaker instance without a VPC, then by default direct internet access is enabled on your instance. You should configure your instance with a VPC and change the default setting to Disable—Access the internet through a VPC.
To train or host models from a notebook, you need internet access. To enable internet access, make sure that your VPC has a NAT gateway and your security group allows outbound connections. To learn more about how to connect a notebook instance to resources in a VPC, see Connect a notebook instance to resources in a VPC in the Amazon SageMaker Developer Guide.
You should also ensure that access to your SageMaker configuration is limited to only authorized users. Restrict users' IAM permissions to modify SageMaker settings and resources.
Recommendation: Checks whether direct internet access is disabled for all Amazon SageMaker notebook instances
To remediate this finding, complete the following steps:
AWS console
Note that you cannot change the internet access setting after a notebook instance is created. It must be stopped, deleted, and recreated.
To configure a SageMaker notebook instance to deny direct internet access:
- Open the SageMaker console at https://console.aws.amazon.com/sagemaker/
- Navigate to Notebook instances.
- Delete the instance that has direct internet access enabled. Choose the instance, choose Actions, then choose stop.
- After the instance is stopped, choose Actions, then choose delete.
- Choose Create notebook instance. Provide the configuration details.
- Expand the network section, then choose a VPC, subnet, and security group. Under Direct internet access, choose Disable—Access the internet through a VPC.
- Choose Create notebook instance.
For more information, see Connect a notebook instance to resources in a VPC in the Amazon SageMaker Developer Guide.
Learn about this finding type's supported assets and scan settings.
Secretsmanager Rotation Enabled Check
Category name in the API: SECRETSMANAGER_ROTATION_ENABLED_CHECK
Checks whether a secret stored in AWS Secrets Manager is configured with automatic rotation. The control fails if the secret isn't configured with automatic rotation. If you provide a custom value for the maximumAllowedRotationFrequency parameter, the control passes only if the secret is automatically rotated within the specified window of time.
Secrets Manager helps you improve the security posture of your organization. Secrets include database credentials, passwords, and third-party API keys. You can use Secrets Manager to store secrets centrally, encrypt secrets automatically, control access to secrets, and rotate secrets safely and automatically.
Secrets Manager can rotate secrets. You can use rotation to replace long-term secrets with short-term ones. Rotating your secrets limits how long an unauthorized user can use a compromised secret. For this reason, you should rotate your secrets frequently. To learn more about rotation, see Rotating your AWS Secrets Manager secrets in the AWS Secrets Manager User Guide.
Recommendation: Checks that all AWS Secrets Manager secrets have rotation enabled
To turn on automatic rotation for Secrets Manager secrets, see Set up automatic rotation for AWS Secrets Manager secrets using the console in the AWS Secrets Manager User Guide. You must choose and configure an AWS Lambda function for rotation.
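If a rotation Lambda function is already available, a minimal CLI sketch for turning on automatic rotation is shown below; the secret ID, function ARN, and 30-day interval are placeholders:
aws secretsmanager rotate-secret \
--secret-id <secret-name> \
--rotation-lambda-arn arn:aws:lambda:<region>:<account-id>:function:<rotation-function> \
--rotation-rules AutomaticallyAfterDays=30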
Learn about this finding type's supported assets and scan settings.
Sns Encrypted Kms
Category name in the API: SNS_ENCRYPTED_KMS
Checks whether an SNS topic is encrypted at rest using AWS KMS. The control fails if an SNS topic doesn't use a KMS key for server-side encryption (SSE).
Encrypting data at rest reduces the risk of data stored on disk being accessed by a user not authenticated to AWS. It also adds another set of access controls to limit the ability of unauthorized users to access the data. For example, API permissions are required to decrypt the data before it can be read. SNS topics should be encrypted at-rest for an added layer of security.
Recommendation: Checks that all SNS topics are encrypted with KMS
To turn on SSE for an SNS topic, see Enabling server-side encryption (SSE) for an Amazon SNS topic in the Amazon Simple Notification Service Developer Guide. Before you can use SSE, you must also configure AWS KMS key policies to allow encryption of topics and encryption and decryption of messages. For more information, see Configuring AWS KMS permissions in the Amazon Simple Notification Service Developer Guide.
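A minimal CLI sketch that enables SSE on an existing topic using the AWS managed key for SNS (alias/aws/sns); you can substitute a customer managed key ID or alias, and the topic ARN is a placeholder:
aws sns set-topic-attributes \
--topic-arn arn:aws:sns:<region>:<account-id>:<topic-name> \
--attribute-name KmsMasterKeyId \
--attribute-value alias/aws/sns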
Learn about this finding type's supported assets and scan settings.
Vpc Default Security Group Closed
Category name in the API: VPC_DEFAULT_SECURITY_GROUP_CLOSED
This control checks whether the default security group of a VPC allows inbound or outbound traffic. The control fails if the security group allows inbound or outbound traffic.
The rules for the default security group allow all outbound and inbound traffic from network interfaces (and their associated instances) that are assigned to the same security group. We recommend that you don't use the default security group. Because the default security group cannot be deleted, you should change the default security group rules setting to restrict inbound and outbound traffic. This prevents unintended traffic if the default security group is accidentally configured for resources such as EC2 instances.
Recommendation: Ensure the default security group of every VPC restricts all traffic
To remediate this issue, start by creating new least-privilege security groups. For instructions, see Create a security group in the Amazon VPC User Guide. Then, assign the new security groups to your EC2 instances. For instructions, see Change an instance's security group in the Amazon EC2 User Guide for Linux Instances.
After you assign the new security groups to your resources, remove all inbound and outbound rules from the default security groups. For instructions, see Delete security group rules in the Amazon VPC User Guide.
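Once your workloads use the new security groups, a minimal CLI sketch for stripping the default group's rules is shown below; the group ID is a placeholder, and the two rules shown are the default allow-all ingress (from the group itself) and egress rules:
aws ec2 revoke-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--ip-permissions IpProtocol=-1,UserIdGroupPairs='[{GroupId=sg-0123456789abcdef0}]'
aws ec2 revoke-security-group-egress \
--group-id sg-0123456789abcdef0 \
--ip-permissions IpProtocol=-1,IpRanges='[{CidrIp=0.0.0.0/0}]'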
Learn about this finding type's supported assets and scan settings.
Vpc Flow Logging Enabled All Vpcs
Category name in the API: VPC_FLOW_LOGGING_ENABLED_ALL_VPCS
VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs. It is recommended that VPC Flow Logs be enabled for packet "Rejects" for VPCs.
Recommendation: Ensure VPC flow logging is enabled in all VPCs
To remediate this finding, complete the following steps:
AWS console
- Sign in to the management console.
- Select Services, then VPC.
- In the left navigation pane, select Your VPCs.
- Select a VPC.
- In the right pane, select the Flow Logs tab.
- If no Flow Log exists, click Create Flow Log.
- For Filter, select Reject.
- Enter a Role and Destination Log Group.
- Click Create Log Flow.
- Click on CloudWatch Logs Group.
Note: Setting the filter to "Reject" will dramatically reduce the logging data accumulation for this recommendation and provide sufficient information for the purposes of breach detection, research and remediation. However, during periods of least privilege security group engineering, setting the filter to "All" can be very helpful in discovering existing traffic flows required for proper operation of an already running environment.
AWS CLI
- Create a policy document named role_policy_document.json and paste the following content:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "test",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
- Create another policy document named iam_policy.json and paste the following content:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action":[
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:PutLogEvents",
"logs:GetLogEvents",
"logs:FilterLogEvents"
],
"Resource": "*"
}
]
}
- Run the below command to create an IAM role:
aws iam create-role --role-name <aws_support_iam_role> --assume-role-policy-document file://<file-path>role_policy_document.json
- Run the below command to create an IAM policy:
aws iam create-policy --policy-name <iam-policy-name> --policy-document file://<file-path>iam_policy.json
- Run the attach-role-policy command using the IAM policy ARN returned at the previous step to attach the policy to the IAM role (if the command succeeds, no output is returned):
aws iam attach-role-policy --policy-arn arn:aws:iam::<aws-account-id>:policy/<iam-policy-name> --role-name <aws_support_iam_role>
- Run describe-vpcs to get the VpcId available in the selected region:
aws ec2 describe-vpcs --region <region>
- The command output should return the VPC Id available in the selected region.
- Run create-flow-logs to create a flow log for the VPC:
aws ec2 create-flow-logs --resource-type VPC --resource-ids <vpc-id> --traffic-type REJECT --log-group-name <log-group-name> --deliver-logs-permission-arn <iam-role-arn>
- Repeat step 8 for other VPCs available in the selected region.
- Change the region by updating --region and repeat the remediation procedure for other VPCs.
Learn about this finding type's supported assets and scan settings.
Vpc Sg Open Only To Authorized Ports
Category name in the API: VPC_SG_OPEN_ONLY_TO_AUTHORIZED_PORTS
This control checks whether an Amazon EC2 security group permits unrestricted incoming traffic from unauthorized ports. The control status is determined as follows:
If you use the default value for authorizedTcpPorts, the control fails if the security group permits unrestricted incoming traffic from any port other than ports 80 and 443.
If you provide custom values for authorizedTcpPorts or authorizedUdpPorts, the control fails if the security group permits unrestricted incoming traffic from any unlisted port.
If no parameter is used, the control fails for any security group that has an unrestricted inbound traffic rule.
Security groups provide stateful filtering of ingress and egress network traffic to AWS. Security group rules should follow the principle of least-privilege access. Unrestricted access (IP address with a /0 suffix) increases the opportunity for malicious activity such as hacking, denial-of-service attacks, and loss of data. Unless a port is specifically allowed, the port should deny unrestricted access.
Recommendation: Checks that any security group with 0.0.0.0/0 of any VPC allows only specific inbound TCP/UDP traffic
To modify a security group, see Work with security groups in the Amazon VPC User Guide.
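A minimal CLI sketch for removing an unrestricted rule on an unauthorized port; the group ID and port 8080 are placeholders, and the command should be repeated for each rule the finding reports:
aws ec2 revoke-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp \
--port 8080 \
--cidr 0.0.0.0/0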
Learn about this finding type's supported assets and scan settings.
Both VPC VPN Tunnels Up
Category name in the API: VPC_VPN_2_TUNNELS_UP
A VPN tunnel is an encrypted link where data can pass from the customer network to or from AWS within an AWS Site-to-Site VPN connection. Each VPN connection includes two VPN tunnels which you can simultaneously use for high availability. Ensuring that both VPN tunnels are up for a VPN connection is important for confirming a secure and highly available connection between an AWS VPC and your remote network.
This control checks that both VPN tunnels provided by AWS Site-to-Site VPN are in UP status. The control fails if one or both tunnels are in DOWN status.
Recommendation: Checks that both AWS VPN tunnels provided by AWS site-to-site are in UP status
To modify VPN tunnel options, see Modifying Site-to-Site VPN tunnel options in the AWS Site-to-Site VPN User Guide.
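A minimal CLI sketch for checking the status of both tunnels of a Site-to-Site VPN connection before and after remediation; the connection ID is a placeholder:
aws ec2 describe-vpn-connections \
--vpn-connection-ids vpn-0123456789abcdef0 \
--query 'VpnConnections[].VgwTelemetry[].{OutsideIpAddress:OutsideIpAddress,Status:Status}'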
Learn about this finding type's supported assets and scan settings.