Troubleshooting NVMe disks
This document lists errors that you might encounter when using disks with the nonvolatile memory express (NVMe) interface.
You can use the NVMe interface for Local SSDs and persistent disks (Persistent Disk or Google Cloud Hyperdisk). Only the most recent machine series, such as Tau T2A, M3, C3, C3D, and H3, use the NVMe interface for Persistent Disk. Confidential VMs also use NVMe for Persistent Disk. All other Compute Engine machine series use the SCSI disk interface for persistent disks.
I/O operation timeout error
If you are encountering I/O timeout errors, latency could be exceeding the default timeout parameter for I/O operations submitted to NVMe devices.
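Error message:

[1369407.045521] nvme nvme0: I/O 252 QID 2 timeout, aborting
[1369407.050941] nvme nvme0: I/O 253 QID 2 timeout, aborting
[1369407.056354] nvme nvme0: I/O 254 QID 2 timeout, aborting
[1369407.061766] nvme nvme0: I/O 255 QID 2 timeout, aborting
...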
Resolution:

To resolve this issue, increase the value of the timeout parameter. Note that most of the operating system images provided by Google already include this change.

To increase the timeout parameter for I/O operations submitted to NVMe devices, add the following line to the /lib/udev/rules.d/65-gce-disk-naming.rules file, and then restart the VM:
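KERNEL=="nvme*n*", ENV{DEVTYPE}=="disk", ATTRS{model}=="nvme_card-pd", ATTR{queue/io_timeout}="4294967295"

To check the io_timeout setting, specified in seconds, before and after making this change, first determine which NVMe controller is used by the persistent disk or Local SSD volume:

ls -l /dev/disk/by-id

Then display the setting for the disk:

cat /sys/class/nvme/CONTROLLER_ID/NAMESPACE/queue/io_timeout

Replace CONTROLLER_ID with the ID of the NVMe disk controller (for example, nvme1) and NAMESPACE with the namespace of the NVMe disk (for example, nvme1n1). If you only have a single disk that uses NVMe, use the command:

cat /sys/class/nvme/nvme0/nvme0n1/queue/io_timeout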
Detached disks still appear in the operating system of a compute instance
On VMs that use Linux kernel versions 6.0 to 6.2, operations involving the Compute Engine API method instances.detachDisk or the gcloud compute instances detach-disk command might not work as expected. The Google Cloud console shows the device as removed, and the compute instance metadata (compute disks describe command) shows the device as removed, but the device mount point and any symlinks created by udev rules are still visible in the guest operating system.
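For example, a detach operation like the following can complete successfully even though the device remains visible in the guest (the instance name, disk name, and zone shown here are illustrative):

gcloud compute instances detach-disk affected-vm --disk=data-disk-1 --zone=us-central1-a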
Error message:

Attempting to read from the detached disk on the VM results in I/O errors:
sudo head /dev/nvme0n3
head: error reading '/dev/nvme0n3': Input/output error
Issue:

Operating system images that use a Linux 6.0-6.2 kernel but don't include a backport of the NVMe fix (https://github.com/torvalds/linux/commit/0dd6fff2aad4e35633fef1ea72838bec5b47559a) fail to recognize when an NVMe disk is detached.
Resolution:

Reboot the VM to complete the process of removing the disk.
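For example, from inside the guest OS:

sudo reboot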
To avoid this issue, use an operating system with a Linux kernel version that doesn't have this problem:

5.19 or older
6.3 or newer
You can use the uname -r command in the guest OS to view the Linux kernel version.
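For example (the version string shown here is illustrative):

uname -r
5.15.0-1030-gcp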
[[["Mudah dipahami","easyToUnderstand","thumb-up"],["Memecahkan masalah saya","solvedMyProblem","thumb-up"],["Lainnya","otherUp","thumb-up"]],[["Sulit dipahami","hardToUnderstand","thumb-down"],["Informasi atau kode contoh salah","incorrectInformationOrSampleCode","thumb-down"],["Informasi/contoh yang saya butuhkan tidak ada","missingTheInformationSamplesINeed","thumb-down"],["Masalah terjemahan","translationIssue","thumb-down"],["Lainnya","otherDown","thumb-down"]],["Terakhir diperbarui pada 2025-09-04 UTC."],[[["\u003cp\u003eThis document details common errors encountered when using disks with the NVMe interface, including I/O timeout errors, detached disk visibility issues, and symlink creation failures.\u003c/p\u003e\n"],["\u003cp\u003eI/O timeout errors on NVMe devices can be resolved by increasing the timeout parameter, typically by modifying the \u003ccode\u003e/lib/udev/rules.d/65-gce-disk-naming.rules\u003c/code\u003e file, although most Google-provided OS images already have this change implemented.\u003c/p\u003e\n"],["\u003cp\u003eDetached NVMe disks might remain visible in the guest OS on VMs using Linux kernel versions 6.0 to 6.2 without a specific NVMe fix; rebooting the VM is required to fully detach the disk, and using kernel versions 5.19 or older, or 6.3 or newer, avoids this issue.\u003c/p\u003e\n"],["\u003cp\u003eC3 and C3D VMs with Local SSDs and certain public SUSE images or custom images might require manual configuration of \u003ccode\u003eudev\u003c/code\u003e rules to create the necessary symlinks for Local SSD devices, which involves updating or creating the \u003ccode\u003e65-gce-disk-naming.rules\u003c/code\u003e and \u003ccode\u003egoogle_nvme_id\u003c/code\u003e files.\u003c/p\u003e\n"]]],[],null,["# Troubleshooting NVMe disks\n\n*** ** * ** ***\n\nThis document lists errors that you might encounter when using disks with the\nnonvolatile memory express (NVMe) interface.\n\nYou can use the NVMe interface for Local SSDs and persistent disks (Persistent Disk\nor Google Cloud Hyperdisk). Only the\nmost recent machine series, such as Tau T2A, M3, C3, C3D, and H3 use the\n[NVMe](http://wikipedia.org/wiki/NVM_Express) interface for\nPersistent Disk. Confidential VMs also use NVMe for Persistent Disk. All other\nCompute Engine machine series use the\n[SCSI](http://wikipedia.org/wiki/SCSI) disk interface for\npersistent disks.\n\nI/O operation timeout error\n---------------------------\n\nIf you are encountering I/O timeout errors, latency could be exceeding the\ndefault timeout parameter for I/O operations submitted to NVMe devices.\n\n**Error message**: \n\n```\n[1369407.045521] nvme nvme0: I/O 252 QID 2 timeout, aborting\n[1369407.050941] nvme nvme0: I/O 253 QID 2 timeout, aborting\n[1369407.056354] nvme nvme0: I/O 254 QID 2 timeout, aborting\n[1369407.061766] nvme nvme0: I/O 255 QID 2 timeout, aborting\n[1369407.067168] nvme nvme0: I/O 256 QID 2 timeout, aborting\n[1369407.072583] nvme nvme0: I/O 257 QID 2 timeout, aborting\n[1369407.077987] nvme nvme0: I/O 258 QID 2 timeout, aborting\n[1369407.083395] nvme nvme0: I/O 259 QID 2 timeout, aborting\n[1369407.088802] nvme nvme0: I/O 260 QID 2 timeout, aborting\n...\n```\n\n\u003cbr /\u003e\n\n**Resolution**:\n\nTo resolve this issue, increase the value of the timeout parameter.\n| **Note:** Most of the operating system images provided by Google already include this change.\n\n1. View the current value of the timeout parameter.\n\n 1. Determine which NVMe controller is used by the persistent disk or Local SSD volume. 
\n\n ```\n ls -l /dev/disk/by-id\n ```\n 2. Display the `io_timeout` setting, specified in seconds, for the disk.\n\n ```\n cat /sys/class/nvme/CONTROLLER_ID/NAMESPACE/queue/io_timeout\n ```\n Replace the following:\n\n \u003cbr /\u003e\n\n - \u003cvar translate=\"no\"\u003eCONTROLLER_ID\u003c/var\u003e: the ID of the NVMe disk controller, for example, `nvme1`\n - \u003cvar translate=\"no\"\u003eNAMESPACE\u003c/var\u003e: the namespace of the NVMe disk, for example, `nvme1n1`\n\n If you only have a single disk that uses NVMe, then use the command: \n\n ```\n cat /sys/class/nvme/nvme0/nvme0n1/queue/io_timeout\n ```\n\n \u003cbr /\u003e\n\n2. To increase the timeout parameter for I/O operations submitted to NVMe\n devices, add the following line to the\n `/lib/udev/rules.d/65-gce-disk-naming.rules` file, and then restart the VM:\n\n ```\n KERNEL==\"nvme*n*\", ENV{DEVTYPE}==\"disk\", ATTRS{model}==\"nvme_card-pd\", ATTR{queue/io_timeout}=\"4294967295\"\n ```\n\nDetached disks still appear in the operating system of a compute instance\n-------------------------------------------------------------------------\n\nOn VMs that use Linux kernel version 6.0 to 6.2, operations\ninvolving the Compute Engine API method `instances.detachDisk` or the\n`gcloud compute instances detach-disk` command might not work as expected.\nThe Google Cloud console shows the device as removed, the compute instance metadata\n(`compute disks describe` command) shows the device as removed, but the device\nmount point and any symlinks created by udev rules are still visible in the\nguest operating system.\n\n**Error message**:\n\nAttempting to read from the detached disk on the VM results in I/O errors: \n\n```\nsudo head /dev/nvme0n3\n\nhead: error reading '/dev/nvme0n3': Input/output error\n```\n\n**Issue**:\n\nOperating system images that use a Linux 6.0-6.2 kernel but don't include\na backport of a [NVMe fix](https://github.com/torvalds/linux/commit/0dd6fff2aad4e35633fef1ea72838bec5b47559a)\nfail to recognize when an NVMe disk is detached.\n\n**Resolution**:\n\nReboot the VM to complete the process of removing the disk.\n\nTo avoid this issue, use an operating system with a\nLinux kernel version that doesn't have this problem:\n\n- 5.19 or older\n- 6.3 or newer\n\nYou can use the `uname -r` command in the guest OS to view the Linux kernel\nversion.\n\nWhat's next?\n------------\n\n- Learn about [Persistent Disk](/compute/docs/disks/persistent-disks).\n- Learn about [Local SSDs](/compute/docs/disks/local-ssd).\n- [Configure disks to meet performance requirements](/compute/docs/disks/performance).\n- Learn about [symlinks](/compute/docs/disks/disk-symlinks)."]]