Unable to mount NFS datastore (1005948)

Symptoms

  • An ESXi/ESX host cannot mount an NFS datastore.
  • The /var/log/messages (ESXi) or /var/log/vmkernel (ESX) log files contain errors similar to:

    Jun 15 13:01:39 esx-02 vmkernel: 2:13:51:38.221 cpu2:4287)WARNING: NFS: 201: Got error 13 from mount call
    Jun 15 13:01:39 esx-02 vmkernel: 2:13:51:38.221 cpu9:4262)WARNING: NFS: 944: MOUNT failed with MOUNT status 13 (Permission denied) trying to mount Server (192.168.10.10) Path (/opt/esx-mounts)

  • The vobd.log file (located at /var/log/) on the ESXi 5.x host contains errors similar to:

    [esx.problem.vmfs.nfs.mount.error.perm.denied] NFS mount ip-address:mountpoint failed: The mount request was denied by the NFS server. Check that the export exists and that the client is permitted to mount it.

Resolution

Note: You may see this issue if you have more than one vmkernel port on the same network segment. VMware recommends having only one vmkernel port per network segment unless port binding is used. For more information on this recommendation, see Multi-homing on ESX/ESXi (2010877).

Consider this example. Checking the vmkernel port configuration, you see that there are two vmkernel ports on the 10.1.1.0 network:

# esxcfg-vmknic -l

You see output similar to:

Interface  Port Group/DVPort  IP Family  IP Address  Netmask        Broadcast   MAC Address        MTU   TSO MSS  Enabled  Type
vmk2       Backup             IPv4       10.1.1.23   255.255.255.0  10.1.1.255  00:50:56:7b:80:c8  1500  65535    true     STATIC
vmk0       0                  IPv4       10.1.1.33   255.255.255.0  10.1.1.255  9c:8e:99:fc:ed:ac  9000  65535

In this example, you want to use vmk0 for NFS connections because Jumbo Frames (MTU 9000) are enabled for that traffic.
Check the current routing table to see which vmkernel port is the default for the 10.1.1.0 network by running the command:

# esxcfg-route -l

You see output similar to:

VMkernel Routes:
Network       Netmask        Gateway       Interface
10.1.1.0      255.255.255.0  Local Subnet  vmk2
192.168.55.0  255.255.255.0  Local Subnet  vmk3
default       0.0.0.0        10.1.1.1      vmk2

From the routing table, you see that vmk2 is the default vmkernel interface for the 10.1.1.0 network and is currently being used for NFS traffic. This is undesirable for these reasons:
  • It is not predictable which vmkernel port is used.
  • In this case, the vmkernel port without Jumbo Frames is used, which results in lower performance.
  • The access control list (ACL) on the NFS server may not include the IP address of vmk2, so the server refuses this host's connection to the NFS export.
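To confirm which interface actually reaches the NFS server, and whether Jumbo Frames work end to end, you can test with vmkping. This is an illustrative sketch only: the interface name and server address are taken from the examples in this article, and the -I option for selecting the outgoing vmkernel interface is only available in later ESXi releases (on earlier releases the ping simply follows the routing table shown above):

# vmkping -I vmk0 -d -s 8972 192.168.10.10

The -d option sets the do-not-fragment flag and -s 8972 sends a payload that only fits if an MTU of 9000 is configured along the entire path. If this command fails while a plain vmkping to the same address succeeds, the traffic is either leaving through the wrong vmkernel port or the Jumbo Frames configuration is incomplete.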
If the host does not have two or more vmkernel ports on the same network, use these troubleshooting steps:
  • Ensure the NFS server supports NFSv3 over TCP. ESX/ESXi does not use UDP for NFS.
  • The NFS server must be accessible in read-write mode by all ESX/ESXi hosts.
  • The NFS server must allow read-write access for the root system account (rw).
  • The NFS export must either be configured with no_root_squash, or the permissions on the exported directory must be opened with chmod 1777.
  • Ensure the ESX/ESXi VMkernel IP is allowed to mount the NFS share by inspecting the export list.
  • Ensure the share is exported by running exportfs -a on the NFS server to re-export all NFS shares (see the export example after this list).
  • Check to ensure that the ESX/ESXi host is correctly configured to use an NFS share:

    • Ensure that there is a vmkernel port group.
    • Check the VMKernel IP address:
      1. Using the VI/vSphere Client, connect to Virtual Center/vCenter Server.
      2. Select the ESXi/ESX host.
      3. Click the Configuration tab.
      4. Click Networking.
      5. View the Networking diagram for the VMKernel or click Properties > Ports > VMKernel. If VMKernel is not listed, you must add it.
    • Check whether the NFS server can be reached from the host using vmkping (see the mount example after this list).
    • Try to ping the ESX/ESXi VMkernel IP address from the NFS server.
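The server-side requirements in the list above can be verified directly on the NFS server. The following is a sketch only: the export path /opt/esx-mounts is taken from the error message in the Symptoms section, the client address 10.1.1.33 is the VMkernel IP address from the earlier example, and the exact /etc/exports syntax can differ between NFS server implementations:

# cat /etc/exports
/opt/esx-mounts 10.1.1.33(rw,no_root_squash,sync)

# exportfs -a

# showmount -e localhost
Export list for localhost:
/opt/esx-mounts 10.1.1.33

The rw and no_root_squash options satisfy the read-write and root-access requirements, and the showmount output confirms that the share is exported to the VMkernel IP address of the host.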
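From the ESX/ESXi host itself, the reachability and mount checks can also be run from the command line. This is a sketch under the same assumptions: the server address comes from the Symptoms section, the datastore name nfs-datastore-01 is a placeholder, the esxcli syntax applies to ESXi 5.x, and esxcfg-nas provides equivalent options on earlier ESX/ESXi releases:

# vmkping 192.168.10.10
# esxcfg-nas -l
# esxcli storage nfs add --host=192.168.10.10 --share=/opt/esx-mounts --volume-name=nfs-datastore-01

If the mount is denied again, new entries similar to those in the Symptoms section should appear in the vmkernel log at the time of the command, which makes it easier to correlate the failure with the NFS server's own logs.
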
ESX Only
  • Check to ensure that the NFS service is ready to accept NFS connections from the ESX host.
  • Ensure NFS daemons are running on the server, using the command rpcinfo -p localhost or service nfs status:

    # rpcinfo -p localhost

    You see output similar to:

    program vers proto port
    100000 2 tcp 111 portmapper
    100000 2 udp 111 portmapper
    100011 1 udp 925 rquotad
    100011 2 udp 925 rquotad
    100011 1 tcp 928 rquotad
    100011 2 tcp 928 rquotad
    100003 2 udp 2049 nfs
    100003 3 udp 2049 nfs
    100003 4 udp 2049 nfs
    100021 1 udp 60528 nlockmgr
    100021 3 udp 60528 nlockmgr
    100021 4 udp 60528 nlockmgr
    100003 2 tcp 2049 nfs
    100003 3 tcp 2049 nfs
    100003 4 tcp 2049 nfs
    100021 1 tcp 50217 nlockmgr
    100021 3 tcp 50217 nlockmgr
    100021 4 tcp 50217 nlockmgr
    100005 1 udp 949 mountd
    100005 1 tcp 952 mountd
    100005 2 udp 949 mountd
    100005 2 tcp 952 mountd
    100005 3 udp 949 mountd
    100005 3 tcp 952 mountd

    # service nfs status

    rpc.mountd (pid 2469) is running...
    nfsd (pid 2466 2465 2464 2463 2462 2461 2460 2459) is running...

Additional Information

If you are using Lab Manager, do not set up NFS datastores through the VI Client on the ESX host. Unlike VMFS datastores, NFS datastores created through the VI Client are not recognized by Lab Manager and conflict with the creation of NFS datastores through the Lab Manager Web console. For more information, see Error during the configuration of the host: NFS Error: Unable to Mount filesystem (1003803).

Update History

05/13/2010 - Added ESX and ESXi 4.0 to Products
06/15/2011 - Added ESX only steps
02/16/2012 - Added ESXi 5.0 error in Symptoms

