Check if the portmapper or rpcbind service is running. Run: `# sudo service rpcbind status`

- A Red Hat RHEL 6 or CentOS Linux host should return something like: `rpcbind (pid 1591) is running...`
- An SLES Linux host should return something like: `Checking for service rpcbind running`

If the rpcbind service is not running on the Linux host, start it with:

    # sudo service rpcbind start
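The check-then-start step above can be wrapped in a small helper. This is a sketch only: the `service` commands are the ones from this page, while the `systemctl` branch is an assumption for hosts running systemd rather than SysV init.

```shell
# Sketch: make sure rpcbind is up before using NFS.
# The systemctl branch is an assumption for systemd hosts; the
# service commands are the ones shown in the steps above.
ensure_rpcbind() {
  if command -v systemctl >/dev/null 2>&1; then
    sudo systemctl is-active --quiet rpcbind || sudo systemctl start rpcbind
  else
    sudo service rpcbind status >/dev/null 2>&1 || sudo service rpcbind start
  fi
}
```

Invoke it as `ensure_rpcbind` after the NFS client packages are installed; on SysV hosts it falls back to the `service` commands shown above.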
Use rpcinfo to list the registered RPC programs or services. Portmapper must be registered and running.

    # sudo rpcinfo -p
       program vers proto   port  service
        100000    4   tcp    111  portmapper
        100000    3   tcp    111  portmapper
        100000    2   tcp    111  portmapper
        100000    4   udp    111  portmapper
        100000    3   udp    111  portmapper
        100000    2   udp    111  portmapper
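As a quick sanity check, the `rpcinfo -p` output can be filtered to confirm that portmapper is registered on port 111 for both TCP and UDP. A minimal sketch, fed here with the sample output from the step above (on a real host, pipe `sudo rpcinfo -p` in instead):

```shell
# Check that portmapper is registered on port 111 for both tcp and udp.
# Reads `rpcinfo -p`-style output on stdin; fields are:
# program vers proto port service
check_portmapper() {
  awk '$5 == "portmapper" && $4 == 111 { seen[$3] = 1 }
       END { if (seen["tcp"] && seen["udp"]) print "portmapper OK";
             else print "portmapper MISSING" }'
}

# Sample output from the step above; replace with: sudo rpcinfo -p | check_portmapper
check_portmapper <<'EOF'
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    2   udp    111  portmapper
EOF
```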
Check if the Linux host can make RPC calls to the rpcbind and NFS programs on the backup/recovery appliance using the following commands.

    # sudo rpcinfo -T tcp <#vm internal IP> rpcbind
    program 100000 version 2 ready and waiting
    program 100000 version 3 ready and waiting
    program 100000 version 4 ready and waiting
    # sudo rpcinfo -T tcp <#vm internal IP> nfs
    program 100003 version 2 ready and waiting
    program 100003 version 3 ready and waiting

If the preceding commands return the output shown, then NFS connectivity from the Linux host to the backup/recovery appliance is good.
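The "ready and waiting" check for the NFS program (100003) can be scripted. A hedged sketch that inspects `rpcinfo -T tcp <appliance IP> nfs` output; the sample lines are the ones from the step above:

```shell
# Decide, from `rpcinfo -T tcp <ip> nfs` output on stdin, whether NFS v3
# on the appliance answered. Pipe real rpcinfo output in on a live host.
nfs_ready() {
  grep -q 'program 100003 version 3 ready and waiting'
}

# Sample output from the step above; on a real host use:
#   sudo rpcinfo -T tcp <appliance IP> nfs | nfs_ready && echo 'NFS v3 reachable'
printf '%s\n' \
  'program 100003 version 2 ready and waiting' \
  'program 100003 version 3 ready and waiting' | nfs_ready && echo 'NFS v3 reachable'
```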
Planning staging disk size
--------------------------
For some large file systems, you may have to manually set the staging disk size for the file system. The default staging disk size is `NAS capacity + 20%`, but there are two cases in which this may be insufficient:

- NFS and SMB network file systems sometimes incorrectly report very large capacities. If the file system reports that it is over 128 TiB, the Backup and DR agent fails the backup with error code 5289: "The reported size of the protected volume requires that the staging disk size is specified for this application". This error prevents the Backup and DR Service from allocating a huge disk that is not needed, or that is larger than the backup/recovery appliance can handle.

- Even if your NAS uses deduplication and compression on its disks, the Backup and DR Service does not deduplicate or compress data in the backup image on the staging disk. Your NAS may report usage of 5 TB, but the backup image on the staging disk may use significantly more space, which can result in a "staging disk full" error. This case also requires that administrators specify a manual staging disk size.
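The sizing rule above can be sketched numerically. The 128 TiB figure is the threshold described for error 5289, and the +20% factor is the default stated above; the invocations are illustrative only:

```shell
# Default staging disk size is reported NAS capacity + 20%.
# Over 128 TiB, a manual staging disk size must be specified instead
# (the error 5289 case described above).
staging_size_tib() {
  awk -v cap="$1" 'BEGIN {
    if (cap + 0 > 128) { print "manual size required (reported capacity over 128 TiB)"; exit }
    printf "default staging size: %.1f TiB\n", cap * 1.2
  }'
}

staging_size_tib 5     # NAS reporting 5 TiB of usage
staging_size_tib 200   # over the 128 TiB limit
```

Remember that the reported capacity may understate the staging space needed when the NAS deduplicates or compresses, since the staging disk holds the data fully expanded.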
If you see either of these errors, then manually set the staging disk size in
[Application Details and Settings](/backup-disaster-recovery/docs/backup/app-details-settings-filesystems).

Virtual snapshots in a .snapshot directory
------------------------------------------

Sometimes on the NAS there are .snapshot directories containing a full copy of the NAS contents. These are virtual snapshots of the NAS. The Backup and DR agent tries to copy all of those snapshots and runs out of space. You can remedy this by using an exclude pattern of `.snapshot` or `~snapshot` (whatever name the NAS uses). See [Exclude patterns](/backup-disaster-recovery/docs/backup/app-details-settings-filesystems).

Additional information for preparing file system hosts
------------------------------------------------------

Additional information relevant to preparing a file system host for protection is in [Manage hosts and their connected applications](/backup-disaster-recovery/docs/configuration/manage-hosts-and-their-applications).

Last updated: 2025-09-04 (UTC).