Last updated (UTC): 2025-09-04.

**Key points:**

- N4 VMs and E2 VMs use dynamic resource management, powered by technologies like performance-aware live migration and a custom-built CPU scheduler, to optimize the use of physical resources and drive cost efficiency.
- Dynamic resource management enhances VM performance through intelligent placement, prediction of resource needs, and minimized wait times by scheduling vCPU threads on physical CPUs based on demand.
- Compute Engine's dynamic resource management continuously monitors VM performance and uses live migration to transparently move workloads to different hosts if resource demands increase.
- E2 VMs use a virtio memory balloon device for memory ballooning, allowing the host to dynamically reclaim unused guest memory and use it for other processes, thus improving memory efficiency.
- Most Linux distributions come with the virtio memory balloon driver. You can disable it on both Linux and Windows VMs, although this may affect the accuracy of rightsizing recommendations.

---

[N4 VMs](/compute/docs/general-purpose-machines#n4_series), powered by 5th generation Intel Xeon processors and [Titanium](/titanium), use next generation dynamic resource management to drive cost efficiency by making better use of the physical resources available on host machines. They also use a custom-built CPU scheduler and performance-aware live migration to balance workload performance needs with
available resources. These are the same technologies that Google Search, Google Ads, Google Maps, and YouTube use to run their latency-sensitive workloads efficiently.

Next generation dynamic resource management also has better NUMA affinity, more accurate prediction of resource requirements, and faster rebalancing using performance-aware live migration.

## How dynamic resource management works

Virtual CPUs (vCPUs) are implemented as threads that are scheduled to run on demand, like any other thread on a host. When a vCPU has work to do, its thread is assigned to an available physical CPU and runs there until the vCPU goes idle again. Similarly, virtual RAM is mapped to physical host pages using page tables that are populated when a guest-physical page is first accessed. This mapping remains fixed until the VM indicates that a guest-physical page is no longer needed.

Dynamic resource management enables Compute Engine to make better use of the available physical CPUs by scheduling VMs onto servers based on resource demand, and by scheduling vCPU threads onto physical CPUs so that wait time is minimized. In most cases this happens seamlessly, so Google Cloud can run VMs more efficiently on fewer servers.

## Components of dynamic resource management

Compute Engine uses the following technologies for dynamic resource management:

### Larger, more efficient physical servers

Core count and RAM density have steadily increased, so host servers now have far more resources than any individual VM. Google continually benchmarks new hardware and looks for platforms that are cost-effective and perform well for the widest variety of cloud workloads and services, allowing you to take advantage of the newest technologies when they're available.

### Intelligent VM placement

Google's cluster management system observes the CPU, RAM, memory bandwidth, and other resource demands of the VMs running on a physical server.
It uses this information to predict how a newly added VM will perform on that server, then searches across thousands of servers to find the best location to add the VM. These observations ensure that when a new VM is placed, it is compatible with its neighbors and unlikely to experience interference from those instances.

### Performance-aware live migration

After VMs are placed on a host, Compute Engine continuously monitors VM performance and wait times. If the resource demands of the VMs increase, Compute Engine can use [live migration](/compute/docs/instances/live-migration-process) to transparently shift workloads to other hosts in the data center. The live migration policy is guided by a predictive approach that gives Compute Engine time to shift the load, often before the VMs experience any wait time.

### Hypervisor CPU scheduler

The hypervisor CPU scheduler dynamically maps virtual CPUs and memory to the physical CPUs and memory of the host server on demand. This dynamic management drives cost efficiency in VMs by making better use of the physical resources. Efficient use of resources means Compute Engine can run VMs more efficiently on fewer servers, allowing Google Cloud to pass on savings to users.

| **Note:** Dynamic resource management applies only to [N4](/compute/docs/general-purpose-machines#n4_machine_types) and [E2](/compute/docs/general-purpose-machines#e2_machine_types) VMs.

## First generation dynamic resource management

E2 was the first VM series to offer dynamic resource management, using a virtio memory balloon device.

### Virtio memory balloon device with E2 VMs

Memory ballooning is an interface mechanism between host and guest to dynamically adjust the size of the memory reserved for the guest. E2 uses a [virtio memory balloon device](https://docs.oasis-open.org/virtio/virtio/v1.1/csprd01/virtio-v1.1-csprd01.html#x1-2790005) to implement memory ballooning.
Through the virtio memory balloon device, a host can explicitly ask a guest to yield a certain number of free memory pages (also called memory balloon inflation) and reclaim that memory, so the host can use it for other VMs. Likewise, the virtio memory balloon device can return memory pages to the guest by deflating the memory balloon. E2 is the only machine family whose VMs use the memory balloon device.

Compute Engine E2 VM instances that are based on a [public image](/compute/docs/images#os-compute-support) have a [virtio memory balloon device](https://docs.oasis-open.org/virtio/virtio/v1.1/csprd01/virtio-v1.1-csprd01.html#x1-2790005), which monitors the guest operating system's memory use. The guest operating system communicates its available memory to the host system. The host reallocates any unused memory to other processes on demand, thereby using memory more effectively. Compute Engine collects and uses this data to make more accurate [rightsizing recommendations](/compute/docs/instances/apply-machine-type-recommendations-for-instances).

### Verifying the driver installation

To check whether your image has the virtio memory balloon device driver installed and loaded, run the following command.

#### Linux

Most Linux distributions include the virtio memory balloon device driver. To verify that your image has the driver installed and loaded, run:

```
sudo modinfo virtio_balloon > /dev/null && echo "Balloon driver is installed" \
  || echo "Balloon driver is not installed"
sudo lsmod | grep virtio_balloon > /dev/null && echo "Balloon driver is loaded" \
  || echo "Balloon driver is not loaded"
```

In [Linux kernels](https://en.wikipedia.org/wiki/Linux_kernel_version_history) before 5.2, the Linux memory system sometimes mistakenly prevents large allocations when the balloon device is present.
This is rarely an issue in practice, but we recommend changing the virtual memory `overcommit_memory` setting to `1` to prevent the issue from occurring. This change is already made by default in all Google-provided images published since February 9, 2021.

To fix the setting, use the following command to change the value from `0` to `1`:

```
sudo /sbin/sysctl -w vm.overcommit_memory=1
```

To persist this change across reboots, add the following to your `/etc/sysctl.conf` file:

```
vm.overcommit_memory=1
```

#### Windows

Compute Engine [Windows images](/compute/docs/instances/windows#windows_server) include the virtio balloon device; custom Windows images don't. To verify whether your Windows image has the driver installed, run:

    googet verify google-compute-engine-driver-balloon

### Disabling the virtio memory balloon device

Using the virtio memory balloon device enables Compute Engine to use memory resources more effectively, so Google Cloud can offer E2 VMs at lower prices. You can opt out of the virtio memory balloon device by disabling the device driver. After disabling the virtio memory balloon device, you will continue to receive [rightsizing recommendations](/compute/docs/instances/apply-machine-type-recommendations-for-instances); however, they might not be as accurate.
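Before opting out, you can check from inside a Linux guest whether the balloon driver is currently active. A minimal sketch that reads `/proc/modules` directly (equivalent to the `lsmod` check described earlier; this is an illustrative snippet, not a Google-provided tool):

```shell
# Check whether the virtio_balloon module is currently loaded by
# reading /proc/modules directly; works even where lsmod is absent.
if grep -qw virtio_balloon /proc/modules; then
  echo "virtio_balloon is loaded"
else
  echo "virtio_balloon is not loaded"
fi
```

If the module is loaded, the `rmmod` command in the following section removes it until the next boot.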
#### Linux

To disable the device in Linux, run the following command:

    sudo rmmod virtio_balloon

You can add this command to the VM's [startup script](/compute/docs/startupscript) to automatically disable the device at VM boot.

#### Windows

To disable the device on Windows, run the following command:

    googet -noconfirm remove google-compute-engine-driver-balloon

You can add this command to the VM's [startup script](/compute/docs/startupscript#providing_a_startup_script_for_windows_instances) to automatically disable the device at VM boot.

## What's next

- Read the blog about [Dynamic resource management](/blog/products/compute/understanding-dynamic-resource-management-in-e2-vms).
- Review the [N4 machine series](/compute/docs/general-purpose-machines#n4_series) information.
- Review the [E2 machine series](/compute/docs/general-purpose-machines#e2_machine_types) information.
- [Create a VM](/compute/docs/instances/create-start-instance).