These predefined roles contain
the permissions required to rescue a VM. To see the exact permissions that are
required, expand the Required permissions section:
Required permissions
The following permissions are required to rescue a VM:
compute.instances.create on the project
compute.disks.create on the project
compute.instances.get on the project
compute.disks.createSnapshot on the disks
compute.instances.attachDisk on the new VM
compute.disks.use on the disk
compute.instances.start on the new and inaccessible VM
compute.instances.stop on the new and inaccessible VM
You might also be able to get these permissions with custom roles or other
predefined roles.
Rescue a VM
If you can't connect to your VM, or your boot disk is full, you must create a
temporary VM to rescue the inaccessible VM:
(Optional) Stop the inaccessible VM.
Create a snapshot from the boot disk of the inaccessible VM. If the root file
system is split across multiple disks, you must snapshot each disk.
Create a temporary VM using the public image that is closest to the
inaccessible VM's OS. A trusted image policy might restrict you from creating
boot disks from public images; in that case, ask an administrator to
temporarily lift the restriction before you create the rescue VM.
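If you prefer the gcloud CLI to the console for the snapshot and temporary VM
steps above, commands like the following can be used. The disk, snapshot, VM,
zone, and image names here are placeholders for illustration, not values from
your project:
# Snapshot the inaccessible VM's boot disk (example names and zone).
gcloud compute disks snapshot my-vm-boot-disk \
    --zone=us-central1-a \
    --snapshot-names=my-vm-boot-snapshot
# Create a temporary rescue VM from a public image close to the original OS.
gcloud compute instances create rescue-vm \
    --zone=us-central1-a \
    --image-family=debian-12 \
    --image-project=debian-cloud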
For each of the snapshots of the inaccessible VM's boot disks you
previously created, create a new disk from the snapshot and attach it to
the rescue VM by doing the following:
In the Google Cloud console, go to the VM instances page.
Click the name of the temporary VM that you created, and then click Edit.
Under Additional disks, click Add new disk, and then do the following:
Enter a disk name, such as my-recovery-disk.
For Source type, select the Snapshot tab.
In the Source snapshot drop-down menu, select the snapshot of the source VM
that you created earlier in these steps.
Click Done, and then click Save.
Connect to the temporary VM using SSH.
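The disk creation and attachment described above can also be done with the
gcloud CLI; the disk, snapshot, VM, and zone names below are illustrative:
# Create a new disk from the snapshot of the inaccessible VM's boot disk.
gcloud compute disks create my-recovery-disk \
    --source-snapshot=my-vm-boot-snapshot \
    --zone=us-central1-a
# Attach the disk to the rescue VM. The device name sets the serial that
# lsblk reports in the next step.
gcloud compute instances attach-disk rescue-vm \
    --disk=my-recovery-disk \
    --device-name=my-recovery-disk \
    --zone=us-central1-a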
Identify the name of each of the disks that you previously attached to
the VM by running the following command:
lsblk -d -o NAME,SERIAL
The output is similar to the following:
NAME SERIAL
sda rescue-vm
sdb my-recovery-disk
In this example, rescue-vm is the boot disk of the rescue VM and
my-recovery-disk is the boot disk from the snapshot of the inaccessible
VM. Note the NAME of the inaccessible VM's boot disk for use in the next
step.
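If the SERIAL column is empty on your system, you can usually also identify
the attached disks through their device symlinks, which on Compute Engine
carry a google- prefix followed by the device name:
# List persistent disk symlinks and the block devices they point to.
ls -l /dev/disk/by-id/google-*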
For each of the disks that you previously attached to the VM, do the
following:
Identify the file system of each partition by running the following
command:
fdisk -l /dev/NAME -o Device,Size,Type
Replace NAME with the name of the inaccessible
VM's boot disk from the previous step. In this example, the name would
be sdb.
The output is similar to the following:
Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk model: PersistentDisk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: B31430F1-F041-4555-96B9-B2F43DC057AD
Device Size Type
/dev/sdb1 2M BIOS boot
/dev/sdb2 20M EFI System
/dev/sdb3 10G Linux filesystem
The Type column lists the file system of each partition. If the file
system type is missing for any partitions, run the following command:
file -sL /dev/PARTITION_NAME
Replace PARTITION_NAME with the name of the partition.
The output differs depending on the file system type:
No file system: If the output only displays data, the partition
doesn't contain a file system. Example output:
/dev/sdb1: data
EFI file system: If the output describes a DOS/MBR boot sector,
the partition has an EFI file system. Example output:
/dev/sdb2: DOS/MBR boot sector, code offset 0x3c+2, OEM-ID "mkfs.fat", sectors/cluster 4, reserved sectors 4, root entries 512, sectors 40960 (volumes <=32 MB), Media descriptor 0xf8, sectors/FAT 40, sectors/track 32, heads 64, serial number 0xf2af2664, label: "EFI        ", FAT (16 bit)
Linux file system: If the output describes file system data, the
partition is a Linux file system. Example output:
/dev/sdb3: SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)
Note the partition name of the Linux file system.
Create a mount point at /rescue:
sudo mkdir /rescue
Mount the Linux file system partition to /rescue:
sudo mount PARTITION_NAME /rescue
Replace PARTITION_NAME with the name of the Linux file system partition
you previously noted.
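To confirm that the partition is mounted where you expect, you can optionally
check the mount point:
# Show the file system mounted at /rescue and its type.
df -hT /rescue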
If you want to modify the root directory of the file system using the
chroot command, you must additionally mount the virtual file systems
and device nodes by running the following commands:
sudo mount -t proc /proc /rescue/proc
sudo mount -t sysfs /sys /rescue/sys
sudo mount -o bind /dev /rescue/dev
sudo mount -o bind /dev/pts /rescue/dev/pts
sudo mount -o bind /run /rescue/run
The inaccessible boot disk's file system is now mounted at /rescue.
You can navigate the file system, change configuration files, fix issues, or
retrieve data.
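With those mounts in place, you can change into the mounted system's root with
chroot. The repair commands inside the chroot below are only examples of the
kind of fixes you might make, not required steps:
# Enter the mounted system; subsequent commands run against the rescued disk.
sudo chroot /rescue /bin/bash
# Example repairs: review mount entries or reset a lost root password.
cat /etc/fstab
passwd root
# Leave the chroot when you are done.
exit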
Revert the changes and boot the inaccessible VM again
After the issue is fixed or the data is retrieved, restore and restart the
original VM by using the following steps:
Unmount the additional disk that is mounted at /rescue in the
temporary VM:
cd ~
sudo umount /rescue
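If you created the additional bind mounts for chroot earlier, unmount them
before unmounting /rescue; otherwise the umount command reports that the
target is busy:
# Only needed if you added the chroot bind mounts; run before umount /rescue.
sudo umount /rescue/run /rescue/dev/pts /rescue/dev /rescue/sys /rescue/proc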
In the Google Cloud console, go to the VM instances page.
Select the temporary VM that you created, and then click Edit.
Under Additional disks, click the detach icon for the disk created in the
earlier steps to detach the additional disk from the temporary VM, and then
click Save.
Go to the VM instances page in the Google Cloud console.
If the inaccessible VM is still running, stop the VM.
Click the name of the VM you just stopped, and then click Edit.
Under Boot disk, click Detach boot disk to detach the existing boot disk from
the inaccessible VM.
Click Configure boot disk to attach the disk that you created and fixed
previously in Rescue a VM:
In the Boot disk section, click the Existing disks tab.
In the drop-down list, select the disk that you created in the previous
section, for example my-recovery-disk.
Click Select, and then click Save.
Start the VM.
You should now be able to connect to the VM using SSH.
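The disk swap in this section can also be performed with the gcloud CLI; the
instance, disk, and zone names below are placeholders:
# Detach the recovery disk from the temporary rescue VM.
gcloud compute instances detach-disk rescue-vm \
    --disk=my-recovery-disk \
    --zone=us-central1-a
# Stop the inaccessible VM, swap in the repaired disk as its boot disk,
# and start it again.
gcloud compute instances stop my-vm --zone=us-central1-a
gcloud compute instances detach-disk my-vm --disk=my-vm-boot-disk --zone=us-central1-a
gcloud compute instances attach-disk my-vm \
    --disk=my-recovery-disk \
    --boot \
    --zone=us-central1-a
gcloud compute instances start my-vm --zone=us-central1-a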