Original document: http://gentoo-wiki.com/HOWTO_Share_Directories_via_NFS
To use NFS, the kernel must support it. This applies to both the server and the client. Change into the kernel source directory and adjust the kernel configuration before recompiling:
cd /usr/src/linux
make menuconfig
Once configuration is complete, compile the kernel and reboot:
make && make modules_install && reboot
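The relevant options live under File systems → Network File Systems in menuconfig. The exact set depends on your kernel version, but a typical resulting .config contains lines like these (a sketch, not an exhaustive list):

```
CONFIG_NFS_FS=y      # NFS client support
CONFIG_NFS_V3=y      # NFSv3 client
CONFIG_NFSD=y        # NFS server support
CONFIG_NFSD_V3=y     # NFSv3 server
```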
The nfs-utils package must be installed:
# emerge nfs-utils
Next, designate a directory to export over NFS. The configuration file to edit is /etc/exports. First create the directory to share, then edit the configuration file:
# mkdir /nfsroot
Entering * instead of an IP address allows connections from anywhere. One thing to be careful about: there must be no space between the host and the opening parenthesis of the options list, or the options will be applied to the wrong hosts.
localhost ~ # cat /etc/exports
/nfsroot *(async,no_subtree_check,no_root_squash,rw)
If portmap is not installed, install it:
localhost ~ # emerge -pv portmap
localhost ~ # emerge portmap
localhost ~ # rc-update add portmap default
Configure the NFS daemon to start automatically at boot:
# rc-update add nfs default
* nfs added to runlevel default
To start the daemon and test it:
# /etc/init.d/nfs start
* Caching service dependencies ... [ ok ]
* Starting portmap ... [ ok ]
* Mounting nfsd filesystem in /proc ... [ ok ]
* Starting NFS statd ... [ ok ]
* Exporting NFS directories ... [ ok ]
* Starting NFS mountd ... [ ok ]
* Starting NFS daemon ... [ ok ]
* Starting NFS smnotify ... [ ok ]
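Once the daemon is up, a quick sanity check is to list the active export table and the RPC services registered with portmap (the output will vary with your configuration):

```shell
exportfs -v        # list currently exported directories with their options
rpcinfo -p         # list RPC services registered with portmap
```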
To verify on the machine running the NFS server that the services are listening:
localhost ~ # netstat -au
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
udp 0 0 *:nfs *:*
udp 0 0 *:32771 *:*
udp 0 0 *:32776 *:*
udp 0 0 *:32777 *:*
udp 0 0 *:682 *:*
udp 0 0 *:sunrpc *:*
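From a client, showmount can query the server's export list before attempting a mount (using the example server's address from below):

```shell
showmount -e 192.168.10.111
```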
Mount the /nfsroot directory exported by 192.168.10.111 onto /mnt:
localhost / # mount -t nfs 192.168.10.111:/nfsroot /mnt
Change into the mounted directory and create a file:
localhost / # cd /mnt/
localhost mnt # touch test
localhost mnt # ls
test
Looking at the shared directory confirms that the file was created:
localhost mnt # ls /nfsroot/
test
To unmount:
localhost ~ # umount /mnt
Options
Options control the access each connecting machine has to the directory. They are specified per client machine as a comma-separated list (no spaces) enclosed in parentheses after the machine they modify.
- ro
- (default) The client machine will have READ-ONLY access to the directory.
- rw
- The client machine will have READ/WRITE access to the directory.
- no_root_squash
- By default, any file request made by user root on the client machine is treated as if it is made by user nobody on the server. (Exactly which UID the request is mapped to depends on the UID of user "nobody" on the server, not the client.) If no_root_squash is selected, then root on the client machine will have the same level of access to the files on the system as root on the server. This can have serious security implications, although it may be necessary if you want to perform any administrative work on the client machine that involves the exported directories. You should not specify this option without a good reason.
- no_subtree_check
- If only part of a volume is exported, a routine called subtree checking verifies that a file that is requested from the client is in the appropriate part of the volume. If the entire volume is exported, disabling this check will speed up transfers.
- sync
- By default, all but the most recent version (version 1.11) of the exportfs command will use async behavior, telling a client machine that a file write is complete - that is, has been written to stable storage - when NFS has finished handing the write over to the filesystem. This behavior may cause data corruption if the server reboots, and the sync option prevents this. See Section 5.9 of the NFS FAQ for a complete discussion of sync and async behavior.
- async
- The opposite of sync; if neither is specified, recent versions of exportfs default to sync. Using async will also speed up transfers, at the cost of the data-integrity risk described above.
(descriptions taken from NFS FAQ listed below)
- insecure
- Tells the NFS server to accept requests originating from unprivileged ports (ports above 1024). This may be needed to allow mounting the NFS share from Mac OS X or through the nfs:/ kioslave in KDE.
hosts.allow
If you try to mount your NFS partition and get something similar to this:
# mount /mnt/nivvy
NFS Portmap: RPC: Program not registered
then the request is being blocked by TCP wrappers. To unblock it, edit /etc/hosts.allow on the NFS server and add all IPs that should be able to access your NFS shares, then restart portmap:
/etc/init.d/portmap restart
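A sketch of what the server's /etc/hosts.allow might contain; the subnet here is an assumption, so substitute your own addresses:

```
# /etc/hosts.allow on the NFS server (example subnet)
portmap: 192.168.10.0/255.255.255.0
lockd:   192.168.10.0/255.255.255.0
mountd:  192.168.10.0/255.255.255.0
rquotad: 192.168.10.0/255.255.255.0
statd:   192.168.10.0/255.255.255.0
```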
Automatic mounting via FSTAB
To make the mounting occur on startup, add the following line to your FSTAB:
x.x.x.x:/directory /mount_directory nfs rw 0 0
Where the variables are defined as above.
Add the nfsmount daemon to the default runlevel: rc-update add nfsmount default
nfsvers=3 is a useful mount option to include. It's required for large-file (>4GB) support:
x.x.x.x:/directory /mount_directory nfs rw,nfsvers=3 0 0
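Putting it together with the server used earlier in this guide, a complete fstab entry might look like this (the mount point is an assumption):

```
192.168.10.111:/nfsroot  /mnt  nfs  rw,nfsvers=3  0 0
```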
Security Implications
TODO: Information!
If you specify 'no_root_squash' in the server's /etc/exports file, anyone who gains root permissions on the client automatically has root permissions on the server within that exported directory - good for sharing Portage directories, bad if nasty people want to compile and/or run evil software on your boxes.
IP addresses are not always static, so when using numeric addresses (as opposed to DHCP names), anyone who gains that IP has access to what you've exported. Keep this in mind with confidential information.
Note: The paragraph below no longer seems to be true. The latest versions of NFS support the sec=krb5 export option, which authenticates via Kerberos 5 instead of UIDs and GIDs.
NFS also uses numeric user and group ID's, so, even if you keep the passwd files identical on your systems somehow, someone else with the right IP address can create a user on their own system that can access anybody's files. Unless you are certain it is impossible to forge an IP address on your network, you cannot depend on the normal user/group access control. For this reason, NFS is not recommended for sharing user-private data (home directories, for example).
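For reference, an exports entry using the sec=krb5 option mentioned in the note above might look like the following sketch (a working Kerberos realm and keytabs are required, which is beyond the scope of this guide):

```
/nfsroot *(sec=krb5,rw,no_subtree_check)
```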
Setting Up Firewall (Server Side)
Setting up a firewall to cover NFS ports is quite tricky because there are ports that are assigned randomly as the NFS daemon is restarted. To see what ports you need to open, type in:
# rpcinfo -p
Try restarting the NFS daemon:
# /etc/init.d/nfs restart
Then run rpcinfo -p again. You'll see that some of the ports have changed. Note that some of them are static: port 111 (tcp and udp) is used by portmap, and port 2049 (tcp and udp) by nfs. The rest, which are equally important, are assigned randomly. To pin them down, edit /etc/conf.d/nfs so that it looks something like this:
# Number of servers to be started up by default
RPCNFSDCOUNT=8
# Options to pass to rpc.mountd
# ex. RPCMOUNTDOPTS="-p 32767"
RPCMOUNTDOPTS="-p 32767"
# Options to pass to rpc.statd
# ex. RPCSTATDOPTS="-p 32765 -o 32766"
RPCSTATDOPTS="-p 32765 -o 32766"
# OPTIONS to pass to rpc.rquotad
# ex. RPCRQUOTADOPTS="-p 32764"
RPCRQUOTADOPTS="-p 32764"
EDITED: the above has not worked for me. Instead I used:
# Number of servers to be started up by default
RPCNFSDCOUNT=8
# Options to pass to rpc.mountd
# ex. RPCMOUNTDOPTS="-p 32767"
RPCMOUNTDOPTS="-p 4002"
# Options to pass to rpc.statd
# ex. RPCSTATDOPTS="-p 32765 -o 32766"
RPCSTATDOPTS="-p 4000"
This pins the statd, mountd, and quotad ports (32764-32767 in the first example, 4000-4002 in the second). The only task left is to fix the lock manager (nlockmgr) ports.
Fixing the nlockmgr ports depends on the version of your kernel and whether or not you build NFS into the kernel or as a module.
Deduce whether you have NFS built into the kernel (Y), as a module (M), or not at all (N):
zgrep CONFIG_NFSD /proc/config.gz
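/proc/config.gz is only present when the kernel was built with CONFIG_IKCONFIG_PROC. If it is missing, the same check can be run against the source tree's .config, assuming /usr/src/linux points at the running kernel's sources:

```shell
grep CONFIG_NFSD /usr/src/linux/.config
```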
If Y:
mount /boot -o remount,rw
- If GRUB:
edit the file /boot/grub/grub.conf in your favorite editor.
- If LILO:
edit the file /etc/lilo.conf in your favorite editor, and then run
lilo
Within the editor, append one of these lines to your kernel options, depending on your kernel version:
lockd.nlm_udpport=4001 lockd.nlm_tcpport=4001 # for 2.6.x kernels
lockd.udpport=4001 lockd.tcpport=4001 # for 2.4.x kernels
And reboot your machine.
If M: Open /etc/modules.d/nfsd in your favorite editor. Append this line:
options lockd nlm_udpport=4001 nlm_tcpport=4001
Run
modules-update
That way, you fix the nlockmgr ports into 4001 tcp/udp.
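After a reboot (built-in case) or after reloading the module, verify that the ports came out as intended; nlockmgr should now show 4001 for both protocols:

```shell
rpcinfo -p | grep nlockmgr
```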
Warning for genkernel Users: If you have compiled nfs as a module, the above won't work, because genkernel does not put the module options into the initrd, and unfortunately it is the initrd that loads the nfs module. You have two options:
- Compile nfs statically and use the in-kernel method (easiest)
- Remove nfs from the MODULES_FS line of the file /usr/share/genkernel/{arch}/modules_load before starting genkernel
Adding the Firewall Rules
It's probably best that you reboot your computer to ensure that all of the appropriate daemons and modules are reloaded, then double check that the ports in use are what you expect by running rpcinfo -p. If that's all set, then add the firewall rules.
1. Save your current firewall rules: iptables-save > /etc/iptables.bak
2. Open /etc/iptables.bak in your favorite text editor
3. Add the following rule(s) in appropriate order (according to your existing rules).
4. Restore all rules to be part of your current configuration: iptables-restore < /etc/iptables.bak
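With the port assignments from the first /etc/conf.d/nfs example above, the individual rules would look something like this (adjust the numbers if you used different ports):

```
-A INPUT -p tcp -m state --state NEW --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m state --state NEW --dport 2049 -j ACCEPT
-A INPUT -p udp --dport 2049 -j ACCEPT
-A INPUT -p tcp -m state --state NEW --dport 4001 -j ACCEPT
-A INPUT -p udp --dport 4001 -j ACCEPT
-A INPUT -p tcp -m state --state NEW --dport 32764:32767 -j ACCEPT
-A INPUT -p udp --dport 32764:32767 -j ACCEPT
```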
For a shorter version of the first few ports (if you want your iptables list to look smaller), you can use -m multiport instead, as follows:
-A INPUT -p tcp -m state --state NEW -m multiport --dports 111,2049,4001,32764:32767 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m multiport --dports 111,2049,4001,32764:32767 -j ACCEPT
Setting Up Firewall (Client Side)
Setting up the firewall on the client side is much, much simpler. The only relevant port is 111 tcp/udp. This is the port for portmap, the only service the client needs to be running.
Hint
- In order for writes to work, the user you mount the share as on your local system needs to have the same user id as the owner of the files on the remote system. usermod -u can be used to make any changes needed. (Please revise if this is inaccurate).
- If you get a permission denied error when trying to write, note that root cannot write to NFS directories (unless you specifically enable it with no_root_squash), so try as a normal user. Note also that you must set the correct permissions on the exported directory on the server, not on the clients.
- If you have permission denied errors try setting your hosts in /etc/hosts then use the host names.
- If you still have permission denied errors do not forget to run "exportfs -ra" after you edit your /etc/hosts file.
- If you still have permission denied errors verify that both the client and server arguments are rw and not ro.
- If you get the error "mount: RPC: Unable to send; errno = Operation not permitted" it could be a client side firewall problem. I use firestarter which was blocking the request every time I tried to mount the NFS share. I had to allow outbound traffic to the server and then it worked.
- If the mount hangs (and the log shows the line «portmap: server localhost not responding, timed out»), check whether the portmap daemon is running.
- If you get errors ending in RPC: Remote system error - Connection refused when trying to mount the NFS file systems on the client, check the portmap configuration files on both systems, but especially the server, to make sure that the portmapper hasn't been told to listen on localhost only. On Gentoo, the file to check is /etc/conf.d/portmap; make sure the line PORTMAP_OPTS="-l" is either missing or commented out. On Ubuntu, the file to check is /etc/default/portmap; make sure the line ARGS="-i 127.0.0.1" is missing or commented out. (Other distros may put this config file elsewhere.)
- Remember that if the directory you are trying to mount spans multiple drives/partitions, you will need to export each partition and create a separate mount point on the client for each partition, even if one partition is a 'child' of the other. (I.e. if / and /home are on separate partitions on the server, each will need to be exported separately, even though /home is a 'child' of /.) If you only export the 'parent', then the child's mount point will be visible, but not any of its contents. In other words, if a file system is mentioned separately in /etc/fstab, it needs a separate entry in /etc/exports and a separate mount point on the client.
- If you get the error: 'lockd: cannot monitor', you may need to add your subnet to hosts.allow as described above, or you need to make sure you've started nfsmount init.d script on client side (netmount alone will not do), which should have started your rpc.statd.
- In case you don't have the nfsmount init script, you forgot to emerge nfs-utils.
- If you are wondering if the host is providing services, rpcinfo provides useful debug information, try: rpcinfo -p host or rpcinfo -t host nfs from the client side.
- nfs over tcp is rumored to be more reliable than udp (the default), although with a claimed "small performance penalty". If the client can mount and unmount, but can't access an exported share, try adding "tcp" to the mount options in fstab.
See also