Linux Network Administrator's Guide (11) Chapter 11: The Network File System (NFS)



2000-07-29 11:50

Publisher: NetBull. Translated by Zhao Wei (GoHigh@shtdu.edu.cn)

NFS, the Network File System, is probably the most prominent network service that uses RPC. It allows you to access files on remote hosts in exactly the same way you would access local files. This is made possible by a mixture of kernel functionality on the client side (which uses the remote file system) and an NFS server on the server side (which provides the file data). This file access is completely transparent to the client, and works across a variety of servers and host architectures.

NFS offers a number of advantages:

- Data accessed by all users can be kept on a central host, with clients mounting this directory at boot time. For example, you can keep all user accounts on one host, and have all hosts on your network mount the /home directory from that host. If NIS is installed as well, users can then log into any system and still work on one set of files.
- Data consuming large amounts of disk space may be kept on a single host. For example, all files and programs relating to LaTeX and METAFONT can be kept and maintained in one place.
- Administrative data may be kept on a single host. There is no longer any need to use rcp to install the same file on 20 different machines.

NFS is largely the work of Rick Sladkey, [1] who wrote the NFS kernel code and large parts of the NFS server. The latter is derived from the unfsd user-space NFS server written by Mark Shand, and the hnfs Harris NFS server written by Donald Becker.

Let's have a look now at how NFS works: a client may request to mount a directory from a remote host onto a local directory, in the same way it can mount a physical device. However, the syntax used to specify the remote directory is different. For example, to mount /home from host vlager onto /users on vale, the administrator would issue the following command on vale: [2]

    # mount -t nfs vlager:/home /users

mount will then try to contact the rpc.mountd mount daemon on vlager via RPC. The server will check whether vale is permitted to mount the directory in question, and if so, return it a file handle. This file handle will be used in all subsequent requests for files below /users.

When someone accesses a file over NFS, the kernel places an RPC call to nfsd (the NFS daemon) on the server machine. This call takes the file handle, the name of the file to be accessed, and the user's user and group id as parameters. These are used in determining access rights to the specified file. In order to prevent unauthorized users from reading or modifying files, user and group ids must be the same on both hosts.
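Both mountd and nfsd are RPC services that register with the server's portmapper (more on this in section 11.3), so you can check from a client whether a server offers NFS at all by querying that portmapper with rpcinfo. This is only a quick diagnostic sketch, reusing the server name vlager from the example above; all port numbers except the portmapper's own 111 are assigned dynamically, so the output shown is merely illustrative:

    $ rpcinfo -p vlager
       program vers proto   port
        100000    2   tcp    111  portmapper
        100000    2   udp    111  portmapper
        100005    1   udp    737  mountd
        100003    2   udp   2049  nfs

If mountd or nfs is missing from the list, mount requests to that server will fail right away.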

In most UN*X implementations, the NFS functionality of both client and server is implemented as kernel-level daemons that are started from user space at system boot. These are the NFS daemon (nfsd) on the server host, and the Block I/O Daemon (biod) on the client host. To improve throughput, biod performs asynchronous I/O using read-ahead and write-behind; also, several nfsd daemons are usually run concurrently.

The NFS implementation of Linux is a little different: the client code is tightly integrated into the virtual file system (VFS) layer of the kernel and doesn't require additional control through biod. The server code, on the other hand, runs entirely in user space, so that running several copies of the server at the same time is almost impossible because of the synchronization issues this would involve. Linux NFS also currently lacks read-ahead and write-behind, but Rick Sladkey plans to add them some time in the future. [3]

The biggest problem with the Linux NFS code is that the Linux kernel as of version 1.0 is not able to allocate memory in chunks bigger than 4K; as a consequence, the networking code cannot handle datagrams bigger than roughly 3500 bytes after subtracting header sizes and the like. This means that transfers to and from NFS daemons running on systems that use large UDP datagrams by default (e.g. 8K on SunOS) need to be downsized artificially. This hurts performance badly in some environments. [4] This limitation is gone in recent Linux-1.1 kernels, and the client code has been modified to take advantage of this.

11.1 Preparing NFS

Before you can use NFS, be it as server or client, you must make sure your kernel has NFS support compiled in. Newer kernels have a simple interface on the proc file system for this, the /proc/filesystems file, which you can display using cat:

    $ cat /proc/filesystems
    minix
    ext2
    msdos
    nodev   proc
    nodev   nfs

If nfs is missing from this list, then you have to compile your own kernel with NFS enabled. Configuring the kernel's network options is explained in the "Kernel Configuration" section of chapter 3.

For kernels prior to Linux 1.1, the most reliable way to find out whether your kernel has NFS support enabled is to actually try to mount an NFS file system. For this, you could create a directory below /tmp, and try to mount a local directory on it:

    # mkdir /tmp/test
    # mount localhost:/etc /tmp/test

If this mount attempt fails with an error message saying "fs type nfs not supported by kernel", you must make a new kernel with NFS enabled. Any other error messages are completely harmless, as you haven't configured the NFS daemons on your host yet.
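If you would rather perform this kernel check from a script than eyeball the output of cat, grep can do the test. The following is merely a small convenience sketch built on the /proc/filesystems interface described above:

    #!/bin/sh
    # exits 0 if the running kernel advertises NFS support
    if grep -q nfs /proc/filesystems; then
        echo "kernel has NFS support"
    else
        echo "no NFS support; rebuild the kernel" >&2
        exit 1
    fi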

11.2 Mounting an NFS Volume

NFS volumes [5] are mounted very much the way usual file systems are mounted. You invoke mount using the following syntax:

    # mount -t nfs nfs_volume local_dir options

nfs_volume is given as remote_host:remote_dir. Since this notation is unique to NFS file systems, you can leave out the -t nfs option.

There are a number of additional options that you may specify when mounting an NFS volume. These may either be given following the -o switch on the command line, or in the options field of the /etc/fstab entry for the volume. In both cases, multiple options are separated from each other by commas. Options specified on the command line always override those given in the fstab file. A sample entry in /etc/fstab might be

    # volume              mount point        type   options
    news:/usr/spool/news  /usr/spool/news    nfs    timeo=14,intr

This volume may then be mounted using

    # mount news:/usr/spool/news

In the absence of an fstab entry, NFS mount invocations look a lot uglier. For instance, suppose you mount your users' home directories from a machine named moonshot, which uses a default block size of 4K for read/write operations. You can reduce the block size to 2K to suit Linux's datagram size limit by issuing a command along these lines (the exact command was lost in transcription; this reconstruction assumes the directories are mounted on /home):

    # mount moonshot:/home /home -o rsize=2048,wsize=2048

A complete description of all valid options is given in the nfs(5) manual page that comes with Rik Faith's util-linux package. The following is an incomplete list of those you would probably want to use:

rsize=n and wsize=n
    These specify the datagram size used by the NFS client for read and write requests, respectively. They currently default to 1024 bytes, due to the limit on UDP datagram size described above.

timeo=n
    This sets the time (in tenths of a second) the NFS client will wait for a request to complete. The default value is 0.7 seconds.

hard
    Explicitly mark this volume as hard-mounted. This is on by default.

soft
    Soft-mount the volume (as opposed to hard-mounting it).

intr
    Allow signals to interrupt an NFS call. Useful for aborting when the server doesn't respond.

Except for rsize and wsize, all of these options govern the client's behavior if the server should become temporarily inaccessible. They play together in the following way: whenever the client sends a request to the NFS server, it expects the operation to have finished after a given interval (specified by the timeout option). If no confirmation is received within this time, a so-called minor timeout occurs, and the operation is retried with the timeout interval doubled. After reaching a maximum timeout of 60 seconds, a major timeout occurs.

By default, a major timeout will cause the client to print a message to the console and start all over again, this time with an initial timeout interval twice that of the previous cascade. Potentially, this may go on forever. Volumes that stubbornly retry an operation until the server becomes available again are called hard-mounted. The opposite variety, soft-mounted volumes, generate an I/O error for the calling process whenever a major timeout occurs. Because of the write-behind introduced by the buffer cache, this error condition is not propagated to the process itself before it calls write(2) the next time, so a program can never be sure that a write operation to a soft-mounted volume has completed at all.

Whether you hard- or soft-mount a volume is not simply a question of taste, but also of what sort of information you want to access from this volume. For example, if you mount your X programs by NFS, you certainly would not want your X session to stop dead just because someone brought the network to a grinding halt by firing up seven copies of xv at the same time, or by pulling the Ethernet plug for a moment. By hard-mounting this volume, you can be sure that your computer will wait until it is able to re-establish contact with your NFS server. On the other hand, non-critical data such as NFS-mounted news partitions or FTP archives may as well be soft-mounted, so that the temporary unreachability or shutdown of the remote machine doesn't hang your session.

If your network connection to the server is flaky, or it goes through a heavily loaded router, you may either increase the initial timeout using the timeo option, or hard-mount the volume but allow signals to interrupt the NFS call, so that you may still abort any hanging file access.

Usually, the mountd daemon will in some way keep track of which directories have been mounted by what hosts. This information can be displayed using the showmount program, which is also included in the NFS server package. The Linux mountd, however, does not do this yet.
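On servers whose mountd does keep such a record, showmount can query it remotely. The lines below are a brief sketch, again using the hypothetical server vlager; the exact set of flags supported may vary between NFS packages:

    $ showmount -e vlager    # list the directories vlager exports, and to whom
    $ showmount vlager       # list the hosts that have mounted volumes from vlager
    $ showmount -d vlager    # list only the directories currently mounted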

11.3 The NFS Daemons

If you want to provide NFS service to other hosts, you have to run the rpc.nfsd and rpc.mountd daemons on your machine. As RPC-based programs, they are not managed by inetd, but are started at boot time and register themselves with the portmapper. Therefore, you have to make sure to start them only after rpc.portmap is running. Usually, you include the following two stanzas in your rc.inet2 script:

    if [ -x /usr/sbin/rpc.mountd ]; then
        /usr/sbin/rpc.mountd; echo -n " mountd"
    fi
    if [ -x /usr/sbin/rpc.nfsd ]; then
        /usr/sbin/rpc.nfsd; echo -n " nfsd"
    fi

The ownership information an NFS daemon provides to its clients usually contains only numerical user and group ids. If both client and server associate the same user and group names with these numerical ids, they are said to share the same uid/gid space. This is the case, for example, when you use NIS to distribute the passwd information to all hosts on your LAN.

On some occasions, however, the ids do not match. Rather than updating the uids and gids of the client to match those of the server, you can use the ugidd mapping daemon to work around this. Using the map_daemon option explained below, you can tell nfsd to map the server's uid/gid space to the client's uid/gid space with the aid of the ugidd running on the client. ugidd is an RPC-based server, and is started from rc.inet2 just like nfsd and mountd:

    if [ -x /usr/sbin/rpc.ugidd ]; then
        /usr/sbin/rpc.ugidd; echo -n " ugidd"
    fi

11.4 The exports File

While the options above applied to the client's NFS configuration, there is a different set of options on the server side that configure its per-client behavior. These options must be set in the /etc/exports file.

By default, mountd will not allow anyone to mount directories from the local host, which is a rather sensible attitude. To permit one or more hosts to mount a directory, it must be exported, that is, it must be specified in the exports file. A sample file may look like this:

    # exports file for vlager
    /home             vale(rw) vstout(rw) vlight(rw)
    /usr/X386         vale(ro) vstout(ro) vlight(ro)
    /usr/TeX          vale(ro) vstout(ro) vlight(ro)
    /                 vale(rw,no_root_squash)
    /home/ftp         (ro)

Each line defines a directory, along with the hosts allowed to mount it. A host name is usually a fully qualified domain name, but may additionally contain the * and ? wildcards, which act the way they do with the Bourne shell. For instance, lab*.foo.com matches lab01.foo.com as well as laber.foo.com. If no host name is given, as with the /home/ftp directory in the example above, any host is allowed to mount this directory.

When checking a client host against the exports file, mountd looks up the client's host name using the gethostbyaddr(2) call, so you have to make sure you don't use an alias in exports. When DNS is not in use, the name returned is the first host name in the hosts file that matches the client's address.
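To illustrate the wildcard syntax described above, an entry along the following lines would open an export to a whole domain at once. The domain foo.com is hypothetical and not part of the sample file:

    # export the TeX tree read-only to every host in the foo.com domain
    /usr/TeX        *.foo.com(ro)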

The host name is followed by an optional, comma-separated list of flags, enclosed in parentheses. These flags may take the following values:

insecure
    Permit non-authenticated access from this machine.

unix-rpc
    Require UNIX-domain RPC authentication from this machine. This simply requires that requests originate from a reserved internet port (i.e., the port number has to be below 1024). This option is on by default.

secure-rpc
    Require secure RPC authentication from this machine. This has not been implemented yet. See Sun's documentation on secure RPC.

kerberos
    Require Kerberos authentication on accesses from this machine. This has not been implemented yet. See the MIT documentation on the Kerberos authentication system.

root_squash
    This is a security feature that denies the superuser on the specified host any special access rights by mapping requests from uid 0 on the client to uid 65534 (-2) on the server. This uid should be associated with the user nobody.

no_root_squash
    Don't map requests from uid 0. This option is on by default.

ro
    Mount the file hierarchy read-only. This option is on by default.

rw
    Mount the file hierarchy read-write.

link_relative
    Convert absolute symbolic links (where the link contents start with a slash) into relative links by prepending the necessary number of ../'s to get from the directory containing the link to the root on the server. This option only makes sense when a host's whole file system is mounted; otherwise, some of the links might point to nowhere, or even worse, to files they were never meant to point to. This option is on by default.

link_absolute
    Leave all symbolic links as they are (the normal behavior for Sun-supplied NFS servers).

map_daemon
    This option tells the NFS server to assume that client and server do not share the same uid/gid space. nfsd will then build a list mapping ids between client and server by querying the client's ugidd daemon.

Whenever nfsd or mountd is started, any error encountered while parsing the exports file is reported to syslogd's daemon facility at level notice.

Note that host names are obtained from the client's IP address by reverse mapping, so you have to have the resolver configured properly. If you use BIND and are very security-conscious, you should enable spoof checking in your host.conf file.
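Spoof checking is switched on with the nospoof option of the resolver library. A minimal host.conf along the following lines is a sketch of one common setup, not a prescription:

    # /etc/host.conf
    order hosts,bind   # consult /etc/hosts first, then the name server
    multi on           # allow hosts in /etc/hosts to have several addresses
    nospoof on         # reject hosts whose forward and reverse lookups disagree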

11.5 The Linux Automounter

Sometimes it is wasteful to mount all NFS volumes users might possibly want to access; partly because of the sheer number of volumes to be mounted, and partly because of the time this would take at startup. A viable alternative to this is a so-called automounter. This is a daemon that automatically mounts any NFS volume as needed, and unmounts it after it has not been used for some time. One clever thing about an automounter is that it is able to mount a volume from alternative places. For instance, you might keep copies of your X programs and support files on two or three hosts, and have all other hosts mount them via NFS. Using an automounter, you may specify all three of them to be mounted on /usr/X386; the automounter will then try to mount any one of them, until one of the mount attempts succeeds.

The automounter commonly used with Linux is called amd. It was originally written by Jan-Simon Pendry and has been ported to Linux by Rick Sladkey. The current version is amd-5.3. Discussing amd is beyond the scope of this book; for a good manual, please refer to the sources; they contain a texinfo file with very detailed information.

Footnotes:

[1] Rick can be reached at jrs@world.std.com.
[2] Note that you can omit the -t nfs argument, because mount sees from the colon that this specifies an NFS volume.
[3] The problem with write-behind is that the kernel buffer cache is indexed by device/inode pairs, and therefore can't be used for NFS-mounted file systems.
[4] As explained to me by Alan Cox: the NFS specification requires the server to flush each write to disk before it returns an acknowledgment. As BSD kernels are only capable of page-sized (4K) writes, writing four 1K chunks to a BSD-based NFS server results in four 4K write operations.
[5] One doesn't say file system, because these are not proper file systems.
