Network File System (NFS) is one of the native ways of sharing files and applications across the network in the Linux/UNIX world. NFS is somewhat similar to Microsoft Windows File Sharing, in that it allows you to attach to a remote file system (or disk) and work with it as if it were a local drive—a handy tool for sharing files and large storage space among users.
The auxiliary daemons (rpc.mountd, rpc.lockd, and rpc.statd) are no longer required in this version of NFS because their functionality has been built into the server, so the portmap service is no longer necessary.
I have three Virtual Machines which I will use for NFS configuration of server and client. Below are the server specs of these Virtual Machines. These VMs are installed on Oracle VirtualBox running on a Linux server.
NOTE:
On a RHEL system you must have an active subscription to RHN, or you can configure a local offline repository from which the "yum" package manager can install the provided rpm and its dependencies.
Install the nfs-utils package:
# yum install nfs-utils
Starting with RHEL/CentOS 7.7, to configure NFS server you must use /etc/nfs.conf instead of /etc/sysconfig/nfs. Since we plan to only enable NFSv4, we will disable older NFS versions using /etc/nfs.conf.
[root@centos-8 ~]# vim /etc/nfs.conf
[nfsd]
vers2=n
vers3=n
vers4=y
vers4.0=y
vers4.1=y
vers4.2=y
Optionally, disable listening for the RPCBIND, MOUNT, and NSM protocol calls, which are not necessary in the NFSv4-only case. Disable related services:
[root@centos-8 ~]# systemctl mask --now rpc-statd.service rpcbind.service rpcbind.socket
Created symlink /etc/systemd/system/rpc-statd.service → /dev/null.
Created symlink /etc/systemd/system/rpcbind.service → /dev/null.
Created symlink /etc/systemd/system/rpcbind.socket → /dev/null.
After you configure the NFS server, restart it to activate the changes and enable it to start automatically after reboot. You can also check the NFS status using systemctl status nfs-server.
[root@centos-8 ~]# systemctl restart nfs-server
[root@centos-8 ~]# systemctl enable nfs-server
Use the netstat utility to list services listening on the TCP and UDP protocols:
The following is an example netstat output on an NFSv4-only server; listening for RPCBIND, MOUNT, and NSM is also disabled. Here, nfs is the only listening NFS service:
[root@centos-8 ~]# netstat --listening --tcp --udp | grep nfs
tcp        0      0 0.0.0.0:nfs         0.0.0.0:*           LISTEN
tcp6       0      0 [::]:nfs            [::]:*              LISTEN
The /etc/exports file controls which file systems are exported to remote hosts and specifies options. It follows these syntax rules: blank lines are ignored, comments begin with a hash mark (#), and long lines can be wrapped with a backslash (\).

Syntax:
export host1(options1) host2(options2) host3(options3)
In this structure, export is the directory being exported, host is the host or network to which the export is shared, and options describes the permissions applied for that host.
I have a folder /nfs_shares which we will share from our NFS server:
[root@centos-8 ~]# mkdir /nfs_shares
In this NFS configuration guide, we create the NFS share /nfs_shares exported to the world (*) with rw and no_root_squash permissions:
[root@centos-8 ~]# cat /etc/exports
/nfs_shares *(rw,no_root_squash)
The following is the list of supported options which we can use in /etc/exports for the NFS server:
secure: The port number from which the client requests a mount must be lower than 1024. This permission is on by default. To turn it off, specify insecure instead.

ro: Allows read-only access to the partition. This is the default permission whenever nothing is specified explicitly.

rw: Allows normal read/write access.

noaccess: The client will be denied access to all directories below /dir/to/mount. This allows you to export the directory /dir to the client and then to specify /dir/to as inaccessible without taking away access to something like /dir/from.

root_squash: This permission prevents remote root users from having superuser (root) privileges on remote NFS-mounted volumes. Here, squash literally means to squash the power of the remote root user.

no_root_squash: This allows the root user on the NFS client host to access the NFS-mounted directory with the same rights and privileges that the superuser would normally have.

all_squash: Maps all user IDs (UIDs) and group IDs (GIDs) to the anonymous user. The opposite option is no_all_squash, which is the default setting.
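As a hypothetical illustration combining several of these options (the paths, subnet, and host below are made up for this example), an /etc/exports file could export one directory read-only to a whole subnet while giving a single trusted host full access:

```
/srv/public    10.10.10.0/24(ro,all_squash)
/nfs_shares    10.10.10.16(rw,no_root_squash) *(ro)
```

Here clients in 10.10.10.0/24 get read-only access to /srv/public with all users mapped to the anonymous user, while 10.10.10.16 gets full read/write access to /nfs_shares and everyone else gets read-only.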
Once you have an /etc/exports file set up, use the exportfs command to tell the NFS server processes to refresh the NFS shares.
To export all file systems specified in the /etc/exports file:
[root@centos-8 ~]# exportfs -a
Use exportfs -r to refresh and re-export all directories (optional, as we have already used exportfs -a):
[root@centos-8 ~]# exportfs -r
To view and list the available NFS shares, use exportfs -v:
[root@centos-8 ~]# exportfs -v
/nfs_shares    <world>(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
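When scripting against the server, it can be handy to pull just the exported paths out of this output. A small sketch, assuming each export fits on one line as in the output above (the awk call simply takes the first whitespace-separated field):

```shell
# Extract only the exported paths from a saved copy of `exportfs -v` output
exportfs_output='/nfs_shares    <world>(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)'
printf '%s\n' "$exportfs_output" | awk '{print $1}'
# → /nfs_shares
```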
HINT:
Every time you make a change in /etc/exports, you don't need to restart nfs-server; you can use exportfs -r to update the exported content, or alternatively execute systemctl reload nfs-server to re-read /etc/exports.
Here,
-r: Re-exports all entries in the /etc/exports file. This synchronizes /var/lib/nfs/xtab with the contents of the /etc/exports file. For example, it deletes entries from /var/lib/nfs/xtab that are no longer in /etc/exports and removes stale entries from the kernel export table.

-a: Exports all entries in the /etc/exports file. It can also be used to unexport the exported file systems when used along with the u option, for example exportfs -ua.

-v: Prints the existing shares.

-u: Unexports a share; for example, exportfs -u clientA:/dir/to/mount unexports the directory /dir/to/mount from the host clientA.
For the complete list of supported options with exportfs, refer to the man page of exportfs.
We will add all the NFS services to our firewalld rule to allow NFS server client communication.
[root@centos-8 ~]# firewall-cmd --permanent --add-service mountd
success
[root@centos-8 ~]# firewall-cmd --permanent --add-service nfs
success
[root@centos-8 ~]# firewall-cmd --reload
success
Use the mount command to access NFS shares on the Linux client. We use the -o argument to choose NFSv4 as the preferred protocol to mount the NFS share. /nfs_shares from the NFS server (centos-8) will be mounted on /mnt on the nfs-client.

[root@nfs-client ~]# mount -o nfsvers=4 10.10.10.12:/nfs_shares /mnt
If I try to access the NFS share using NFSv3, then after waiting for the timeout period the client fails to mount the share, as we have restricted the NFS server to allow only NFSv4 connections.
[root@nfs-client ~]# mount -o nfsvers=3 10.10.10.12:/nfs_shares /mnt
mount.nfs: No route to host
We can use mount command to list NFS mount points on nfs-client.
[root@nfs-client ~]# mount | grep nfs
10.10.10.12:/nfs_shares on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.10.10.16,local_lock=none,addr=10.10.10.12)
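To confirm from a script which protocol version was actually negotiated, you can grep the vers= option out of the mount line. A sketch against a saved copy of the line above:

```shell
# Pull the negotiated NFS version out of a saved `mount` output line
mount_line='10.10.10.12:/nfs_shares on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576)'
printf '%s\n' "$mount_line" | grep -o 'vers=[0-9.]*'
# → vers=4.2
```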
To remove NFS share access you can unmount the mount point
[root@nfs-client ~]# umount /mnt
To access NFS shares persistently, i.e. across reboots, you can add the mount point details to /etc/fstab. But be cautious before using this: if the NFS server is unreachable during the boot stage of the NFS client, the client may fail to boot.
Add NFS mount point details in /etc/fstab in the below format. Here 10.10.10.12 is my NFS server. I have added some additional mount options rw and soft to access the NFS shares.
HINT:
Since I have shared my NFS share with rw permission in the NFS server configuration steps, I am using rw on the client; if you have a read-only NFS share, use ro in the mount options accordingly.
10.10.10.12:/nfs_shares /mnt nfs rw,soft 0 0
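If the boot-hang risk mentioned earlier is a concern, one common mitigation is a sketch like the following, using the standard nofail and _netdev mount options so the system does not fail the boot when the share is unreachable and waits for the network before mounting (verify the exact behavior against your distribution's mount(8) and systemd.mount documentation):

```
10.10.10.12:/nfs_shares /mnt nfs rw,soft,nofail,_netdev 0 0
```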
Next execute mount -a to mount all the partitions from /etc/fstab
[root@nfs-client ~]# mount -a
Check if the mount was successful and you can access NFS share on the client.
[root@nfs-client ~]# mount | grep /mnt
10.10.10.12:/nfs_shares on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.10.10.16,local_lock=none,addr=10.10.10.12)
NFSv2 and NFSv3 rely heavily on RPCs to handle communications between clients and servers. RPC services in Linux are managed by the portmap service.
The following list shows the various RPC processes that facilitate the NFS service under Linux:
rpc.mountd: Receives mount requests from NFS clients and verifies that the requested file system is currently exported.

rpc.nfsd: The NFS server process that satisfies client requests for NFS services.

rpc.statd: Implements the Network Status Monitor (NSM) protocol, which notifies NFS clients when an NFS server is restarted without being gracefully brought down; it works together with rpc.lockd when queried and is started automatically by the nfslock service.

rpc.rquotad: Supplies the interface between NFS and the quota manager.
We will install nfs-utils, and additionally rpcbind, to configure the NFS server (NFSv3) in Red Hat/CentOS 7/8 Linux.
NOTE:
On a RHEL system you must have an active subscription to RHN, or you can configure a local offline repository from which the "yum" package manager can install the provided rpm and its dependencies.
[root@centos-7 ~]# yum -y install nfs-utils rpcbind
On Debian and Ubuntu you should install the following packages:
# apt-get -y install nfs-common nfs-kernel-server rpcbind
We do not need any additional NFS configuration to configure NFS server (basic). But you can check /etc/sysconfig/nfs (if using RHEL/CentOS 7.6 and earlier) or /etc/nfs.conf (if using RHEL/CentOS 7.7 or higher) for any customization.
[root@centos-7 ~]# systemctl enable nfs-server --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.

[root@centos-7 ~]# systemctl enable rpcbind
Check the status of the nfs-server and rpcbind services to make sure they are active and running:
[root@centos-7 ~]# systemctl status nfs-server
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
   Active: active (exited) since Sat 2020-04-18 17:03:24 IST; 8s ago
 Main PID: 1999 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/nfs-server.service

Apr 18 17:03:24 centos-7.example.com systemd[1]: Starting NFS server and services...
Apr 18 17:03:24 centos-7.example.com systemd[1]: Started NFS server and services.
NOTE:
systemd will automatically start rpcbind (as a dependency) whenever the nfs server is started, and so you don’t need to explicitly start rpcbind separately.
[root@centos-7 ~]# systemctl status rpcbind
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2020-04-18 17:03:18 IST; 20s ago
 Main PID: 1982 (rpcbind)
   CGroup: /system.slice/rpcbind.service
           └─1982 /sbin/rpcbind -w

Apr 18 17:03:18 centos-7.example.com systemd[1]: Starting RPC bind service...
Apr 18 17:03:18 centos-7.example.com systemd[1]: Started RPC bind service.
Check the netstat output for listening TCP and UDP ports
[root@centos-7 ~]# netstat -ntulp | egrep nfs\|rpc
tcp   0  0 0.0.0.0:42725    0.0.0.0:*    LISTEN   1991/rpc.statd
tcp   0  0 0.0.0.0:20048    0.0.0.0:*    LISTEN   1995/rpc.mountd
tcp6  0  0 :::20048         :::*         LISTEN   1995/rpc.mountd
tcp6  0  0 :::44816         :::*         LISTEN   1991/rpc.statd
udp   0  0 0.0.0.0:880      0.0.0.0:*             1982/rpcbind
udp   0  0 127.0.0.1:895    0.0.0.0:*             1991/rpc.statd
udp   0  0 0.0.0.0:20048    0.0.0.0:*             1995/rpc.mountd
udp   0  0 0.0.0.0:51945    0.0.0.0:*             1991/rpc.statd
udp6  0  0 :::880           :::*                  1982/rpcbind
udp6  0  0 :::20048         :::*                  1995/rpc.mountd
udp6  0  0 :::42581         :::*                  1991/rpc.statd
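Some of these helper daemons (rpc.statd in particular) pick arbitrary ports by default, so scripts sometimes need to recover a port from this output, for example to open it in a firewall by number. A sketch that parses a saved netstat line (the awk split takes the port after the colon in the fourth column):

```shell
# Recover the rpc.mountd TCP port from a saved netstat output line
netstat_line='tcp 0 0 0.0.0.0:20048 0.0.0.0:* LISTEN 1995/rpc.mountd'
printf '%s\n' "$netstat_line" | awk '/rpc\.mountd/ {split($4, a, ":"); print a[2]; exit}'
# → 20048
```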
You can compare this output with the NFSv4 setup; many more ports and services are in use with NFSv3 than with NFSv4.
Next we will create a directory which we can share over NFS server. In this NFS configuration guide, I will create a new directory /nfs_shares to share for NFS clients.
[root@centos-7 ~]# mkdir /nfs_shares
The syntax and procedure to create an NFS share are the same for NFSv4 and NFSv3.
Syntax:
export host1(options1) host2(options2) host3(options3)
In this structure, export is the directory being exported, host is the host or network to which the export is shared, and options describes the permissions applied for that host.
In this NFS configuration guide, we create the NFS share /nfs_shares exported to the world (*) with rw and no_root_squash permissions:
[root@centos-7 ~]# cat /etc/exports
/nfs_shares *(rw,no_root_squash)
The list of options supported with the NFSv3 configuration remains the same as I shared under the NFSv4 section of this article.
Once you configure the NFS server and have an /etc/exports file set up, use the exportfs command to tell the NFS server processes to refresh the NFS shares.
To export all file systems specified in the /etc/exports file:
[root@centos-7 ~]# exportfs -a
HINT:
I prefer to use exportfs -r as it re-exports all the shares. The list of options supported with exportfs is the same for NFSv3 and NFSv4, as shared above in this article.
List the currently exported NFS shares on the server. This command will also show the default permissions applied to the NFS share.
[root@centos-7 ~]# exportfs -v
/nfs_shares    <world>(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
You can get the list of NFS and rpcbind ports used by NFSv3 from the netstat output we shared earlier; instead of opening individual ports, we will use the firewalld service names to allow firewall access for NFSv3.
[root@centos-7 ~]# firewall-cmd --permanent --add-service nfs
success
[root@centos-7 ~]# firewall-cmd --permanent --add-service mountd
success
[root@centos-7 ~]# firewall-cmd --permanent --add-service rpc-bind
success
Reload the firewall service to make the changes persistent
[root@centos-7 ~]# firewall-cmd --reload
success
We use the mount command with the -o argument to choose NFSv3 as the preferred protocol to mount the NFS share.

[root@nfs-client ~]# mount -o nfsvers=3 10.10.10.2:/nfs_shares /mnt
Check if the mount was successful
[root@nfs-client ~]# mount | grep /mnt
10.10.10.2:/nfs_shares on /mnt type nfs (rw,relatime,vers=3,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.10.10.2,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.10.10.2)
If I try to access NFS shares using NFSv4
[root@nfs-client ~]# mount -o nfsvers=4 10.10.10.2:/nfs_shares /mnt
As you can see, the client was allowed to access the NFS share even with NFSv4; since we have not restricted our NFS server to use only NFSv3, it accepts NFSv4 connections as well.
[root@nfs-client ~]# mount | grep /mnt
10.10.10.2:/nfs_shares on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.10.10.16,local_lock=none,addr=10.10.10.2)
You can use the same list of commands to list NFS mount points for NFSv3 mounts on the clients as I listed under NFSv4.
To remove NFS share access you can unmount the mount point
[root@nfs-client ~]# umount /mnt
To access NFS shares persistently, i.e. across reboots, you can add the mount point details to /etc/fstab. But be cautious before using this: if the NFS server is unreachable during the boot stage of the NFS client, the client may fail to boot.
Add the NFS mount point details in /etc/fstab in the below format. Here 10.10.10.2 is my NFS server. I have added the mount options defaults, soft, and nfsvers=3 so that the NFS share is accessed only with the v3 protocol.
10.10.10.2:/nfs_shares /mnt nfs defaults,soft,nfsvers=3 0 0
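As a quick sanity check, the six standard fstab fields (device, mount point, file system type, options, dump, pass) of the entry above can be pulled apart in plain shell; this is only a sketch for illustrating the format:

```shell
# Split the fstab entry into its standard fields
entry='10.10.10.2:/nfs_shares /mnt nfs defaults,soft,nfsvers=3 0 0'
set -- $entry   # word-split into positional parameters
echo "server=${1%%:*} export=${1#*:} mountpoint=$2 fstype=$3 options=$4"
# → server=10.10.10.2 export=/nfs_shares mountpoint=/mnt fstype=nfs options=defaults,soft,nfsvers=3
```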
Next execute mount -a to mount all the partitions from /etc/fstab
[root@nfs-client ~]# mount -a
Check if the mount was successful and you can access NFS share on the client.
[root@nfs-client ~]# mount | grep /mnt
10.10.10.2:/nfs_shares on /mnt type nfs (rw,relatime,vers=3,rsize=524288,wsize=524288,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.10.10.2,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.10.10.2)
Lastly, I hope the steps from this article to install and configure an NFS server and client using NFSv3 and NFSv4 on Red Hat and CentOS 7/8 Linux were helpful. Let me know your suggestions and feedback using the comment section.