NFS 3 client filesystem (QNX Neutrino)
fs-nfs3 [global_options] [mountpoint_options] [mountpoint_spec] [[mountpoint_options] mountpoint_spec ...]
QNX Neutrino
The global options include the following:
The mountpoint options include the following:
The default is none of these. For more information, see Pathname Management in the Process Manager chapter of the System Architecture guide.
A mountpoint_spec is in the form:
remote_host:/remote_export local_mountpoint
The components are as follows:
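For example, in the following spec (also used in the Examples section below), server_node is the remote_host, /qnx_bin is the remote_export, and /bin is the local_mountpoint:
server_node:/qnx_bin /bin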
The fs-nfs3 filesystem manager is an NFS 3 client operating over TCP/IP. To use it, you must have an NFS server.
When you use fs-nfs3 with write caching (the default), you benefit from enhanced filesystem performance. However, there can be interoperability issues if more than one NFS client accesses the same file on the NFS server: until fs-nfs3's cached data has been written to the server, another NFS client that reads the same file won't see the changes. If you want fs-nfs3 to write file modifications to the NFS server immediately, use the -w sync=hard option to turn off write caching.
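For example, to mount the qnx_bin export from server_node (names borrowed from the Examples section below) with write caching turned off:
fs-nfs3 -w sync=hard server_node:/qnx_bin /bin &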
This filesystem manager requires a TCP/IP transport layer, such as the one provided by io-pkt*. It also needs socket.so and libc.so.
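For example, you could start the TCP/IP stack before running fs-nfs3; this is only a sketch, because the driver to load (e1000 here) depends on your network hardware:
io-pkt-v4-hc -d e1000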
By default, this utility doesn't set any upper limit on the number of inodes.
You can also create mountpoints with the mount command by specifying nfs for the type and -o ver3 as an option. You must start fs-nfs3 before creating mountpoints in this manner. If you start fs-nfs3 without any arguments, it runs in the background so you can use mount. The -o options that you can use with mount include the following:
If you try to access a link that has a trailing slash, fs-nfs3 immediately returns EINVAL (invalid argument) instead of resolving the link and then reporting an error such as EPERM (operation not permitted) or ENOTDIR (not a directory). It does this to reduce network traffic: a path that ends in a slash must refer to a directory, so the access would ultimately fail anyway.
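For example, if /bin/mylink is a symbolic link under an fs-nfs3 mountpoint (a hypothetical name), an access such as:
ls /bin/mylink/
fails immediately with EINVAL instead of generating requests to the server.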
Mount the qnx_bin export as /bin from an NFS server named server_node:
fs-nfs3 server_node:/qnx_bin /bin &
Mount /nfs1 using TCP and /nfs3 using UDP (the -t option applies only to the mountpoint_spec that follows it):
fs-nfs3 -t host1:/ /nfs1 host2:/ /nfs3
Mount both using TCP:
fs-nfs3 -t host1:/ /nfs1 -t host2:/ /nfs3
Mount an NFS filesystem (fs-nfs3 must be running first):
mount -t nfs -o ver3 server_node:/qnx_bin /bin
Mount an NFS filesystem, using TCP (fs-nfs3 must be running first):
mount -t nfs -o tcp,ver3 server:/tmp /mnt
If possible, you should use fs-nfs3 instead of fs-nfs2.