This guide shows you how to configure resources for the Cloud Storage FUSE CSI driver sidecar container, including setting up a private image, custom write buffer, and custom read cache volume. Typically, you don't need to change these settings.
The Cloud Storage FUSE CSI driver uses a customizable sidecar container to efficiently mount and access Cloud Storage buckets. By configuring the sidecar, you can fine-tune application performance and resource usage, which can lead to faster data access, quicker processing times, and potentially lower overall resource consumption for your application.
This guide is for developers, administrators, and architects who want to optimize the performance, security, and efficiency of applications that interact with GKE.
Before reading this page, ensure you're familiar with the basics of Cloud Storage, Kubernetes, and containerization concepts.
How the sidecar container works
The Cloud Storage FUSE CSI driver uses a sidecar container to mount Cloud Storage buckets so that they are accessible as local file systems to Kubernetes applications. This sidecar container, named gke-gcsfuse-sidecar, runs alongside the workload container within the same Pod. When the driver detects the gke-gcsfuse/volumes: "true" annotation in a Pod specification, it automatically injects the sidecar container. This sidecar container approach helps to ensure security and manage resources effectively.
The sidecar container handles the complexities of mounting the Cloud Storage buckets and provides file system access to the applications without requiring you to manage the Cloud Storage FUSE runtime directly. You can configure resource limits for the sidecar container using annotations like gke-gcsfuse/cpu-limit and gke-gcsfuse/memory-limit. The sidecar container model also ensures that the Cloud Storage FUSE instance is tied to the workload lifecycle, preventing it from consuming resources unnecessarily. This means the sidecar container terminates automatically when the workload containers exit, especially in Job workloads or Pods with a RestartPolicy of Never.
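For reference, a minimal Pod that triggers sidecar injection might look like the following sketch. The Pod name, volume name, mount path, and the BUCKET_NAME and KSA_NAME placeholders are illustrative assumptions, not values defined elsewhere in this guide:
apiVersion: v1
kind: Pod
metadata:
  name: gcsfuse-example
  annotations:
    gke-gcsfuse/volumes: "true"   # triggers automatic sidecar injection
spec:
  serviceAccountName: KSA_NAME    # a Kubernetes ServiceAccount with access to the bucket
  containers:
  - name: main
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: gcsfuse-volume
      mountPath: /data
  volumes:
  - name: gcsfuse-volume
    csi:
      driver: gcsfuse.csi.storage.gke.io
      volumeAttributes:
        bucketName: BUCKET_NAME   # the Cloud Storage bucket to mount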
Istio compatibility
The Cloud Storage FUSE CSI driver's sidecar container and Istio can coexist and run concurrently in your Pod. Istio's sidecar proxy manages network traffic while the CSI sidecar optimizes storage access, allowing your applications to interact efficiently with Cloud Storage with improved performance and observability.
Configure custom write buffer
Cloud Storage FUSE stages writes in a local directory, and then uploads them to Cloud Storage on close or fsync operations.
This section describes how to configure a custom buffer volume for Cloud Storage FUSE write buffering. This scenario might apply if you need to replace the default emptyDir volume that Cloud Storage FUSE uses to stage files in write operations. This is useful if you need to write files larger than 10 GiB on Autopilot clusters.
You can specify any type of storage supported by the Cloud Storage FUSE CSI driver, such as Local SSD, Persistent Disk-based storage, or a RAM disk (memory). GKE uses the specified volume for file write buffering. To learn more about these options, see Select the storage for backing your file cache.
To use the custom buffer volume, you must specify a non-zero fsGroup.
The following example shows how you can use a predefined PersistentVolumeClaim as the buffer volume:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    gke-gcsfuse/volumes: "true"
spec:
  securityContext:
    fsGroup: FS_GROUP
  containers:
  ...
  volumes:
  - name: gke-gcsfuse-buffer
    persistentVolumeClaim:
      claimName: BUFFER_VOLUME_PVC
Replace the following:
- FS_GROUP: the fsGroup ID.
- BUFFER_VOLUME_PVC: the predefined PVC name.
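The buffer volume PVC is an ordinary PersistentVolumeClaim that you create ahead of time. The following is a minimal sketch; the premium-rwo storage class and the 100Gi size are assumptions that you should adjust for your workload:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: BUFFER_VOLUME_PVC
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: premium-rwo   # assumption: any StorageClass available in your cluster works
  resources:
    requests:
      storage: 100Gi              # assumption: size this for your largest staged writes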
Configure custom read cache volume
This section describes how to configure a custom cache volume for Cloud Storage FUSE read caching.
This scenario might apply if you need to replace
the default emptyDir
volume for Cloud Storage FUSE to cache the files in read
operations. You can specify any type of storage supported by GKE,
such as a PersistentVolumeClaim, and GKE will use the
specified volume for file caching. This is useful if you need to cache files
larger than 10 GiB on Autopilot clusters. To use the custom cache
volume, you must specify a non-zero fsGroup
.
The following example shows how you can use a predefined PersistentVolumeClaim as the cache volume:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    gke-gcsfuse/volumes: "true"
spec:
  securityContext:
    fsGroup: FS_GROUP
  containers:
  ...
  volumes:
  - name: gke-gcsfuse-cache
    persistentVolumeClaim:
      claimName: CACHE_VOLUME_PVC
Replace the following:
- FS_GROUP: the fsGroup ID.
- CACHE_VOLUME_PVC: the predefined PersistentVolumeClaim name.
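Alternatively, you can back the cache with a RAM disk instead of a PVC. The following sketch replaces the default cache volume with a memory-backed emptyDir; the sizeLimit value is an assumption:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    gke-gcsfuse/volumes: "true"
spec:
  securityContext:
    fsGroup: FS_GROUP
  containers:
  ...
  volumes:
  - name: gke-gcsfuse-cache
    emptyDir:
      medium: Memory    # RAM disk; usage counts against the Pod's memory
      sizeLimit: 10Gi   # assumption: size this for your read working set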
Configure a private image for the sidecar container
This section describes how to use the sidecar container image if you are hosting it in a private container registry. This scenario might apply if you need to use private nodes for security purposes.
To configure and consume the private sidecar container image, follow these steps:
- Refer to this GKE compatibility table to find a compatible public sidecar container image.
- Pull it to your local environment and push it to your private container registry.
- In the manifest, specify a container named gke-gcsfuse-sidecar with only the image field. GKE uses the specified sidecar container image to prepare for the sidecar container injection.
Here is an example:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    gke-gcsfuse/volumes: "true"
spec:
  containers:
  - name: gke-gcsfuse-sidecar
    image: PRIVATE_REGISTRY/gcs-fuse-csi-driver-sidecar-mounter:PRIVATE_IMAGE_TAG
  - name: main # your main workload container.
Replace the following:
- PRIVATE_REGISTRY: your private container registry.
- PRIVATE_IMAGE_TAG: your private sidecar container image tag.
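If your private registry requires authentication, the Pod also needs image pull credentials. A minimal sketch, assuming a pre-created Secret named regcred of type kubernetes.io/dockerconfigjson in the same namespace:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    gke-gcsfuse/volumes: "true"
spec:
  imagePullSecrets:
  - name: regcred   # assumption: a docker-registry Secret you created beforehand
  containers:
  - name: gke-gcsfuse-sidecar
    image: PRIVATE_REGISTRY/gcs-fuse-csi-driver-sidecar-mounter:PRIVATE_IMAGE_TAG
  - name: main # your main workload container.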
Configure sidecar container resources
By default, the sidecar container is configured with the following resource requests, with resource limits unset (for Standard clusters):
- 250m CPU
- 256 MiB memory
- 5 GiB ephemeral storage
To override these values, you can optionally specify the annotations gke-gcsfuse/[cpu-limit|memory-limit|ephemeral-storage-limit|cpu-request|memory-request|ephemeral-storage-request], as shown in the following example:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    gke-gcsfuse/volumes: "true"
    gke-gcsfuse/cpu-limit: "10"
    gke-gcsfuse/memory-limit: 10Gi
    gke-gcsfuse/ephemeral-storage-limit: 1Ti
    gke-gcsfuse/cpu-request: 500m
    gke-gcsfuse/memory-request: 1Gi
    gke-gcsfuse/ephemeral-storage-request: 50Gi
You can use the value "0" to unset any resource limits or requests on Standard clusters. For example, the annotation gke-gcsfuse/memory-limit: "0" leaves the sidecar container memory limit empty while keeping the default memory request. This is useful when you cannot decide on the amount of resources Cloud Storage FUSE needs for your workloads and want to let Cloud Storage FUSE consume all the available resources on a node. After calculating the resource requirements for Cloud Storage FUSE based on your workload metrics, you can set appropriate limits.
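For example, the following annotations unset all three limits while keeping the default requests in place. This is a sketch of the unset pattern described above, intended for the measurement phase rather than as a production configuration:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    gke-gcsfuse/volumes: "true"
    gke-gcsfuse/cpu-limit: "0"
    gke-gcsfuse/memory-limit: "0"
    gke-gcsfuse/ephemeral-storage-limit: "0"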
What's next
- Learn how to optimize performance for the Cloud Storage FUSE CSI driver.
- Explore additional samples for using the CSI driver on GitHub.
- Learn more about Cloud Storage FUSE.