UNIX Hints & Hacks
Chapter 5: Account Management
Every account needs a place to call home, and there is more than one way to build that home in UNIX. Where a user's home directory is configured depends greatly on the individual environment.
Flavors: All
A local home directory is one configured on the workstation or server that the user is actually sitting at. When users log in to such a system, they don't go over the network to get to their files; it's the basic textbook definition. A local home directory gives users the fastest transfer rates they can get, and if there is a problem with the network, their files are still accessible. Many times, users complain that the network to a file server is too slow. If there is a bottleneck anywhere, they say, it is in the network, and they ask why they can't work locally on their workstations. But this can often come at a price.
There are limitations to working locally versus working remotely off a file server. Most local workstations don't have the hot-swap, fault-tolerant, spare RAID drives of the newer disk arrays being implemented in servers. As system drives keep growing, users want to put their files onto the system drive to use all the extra space. If a disk failure were to occur on a local workstation, most companies have only low-level maintenance support for workstations, versus the 24/7 high-level support that the servers have. The user might be down until the disk can be shipped, repaired, and returned. Large installations that regularly work with system engineers from various vendors usually receive special treatment and get a drive out that day or the next. When the new drive comes in, the system and its configurations have to be rebuilt or restored from tape, and all the user's files as well. This is true providing, of course, that a backup solution was in place on the workstation to a local or remote tape device.
In some cases, when users work locally and a failure occurs, a user is right there asking for any possible disk drive that can be attached to the workstation to get him or her back up and running. There are two ways to answer, depending on the type of administrator you are, who the user is, or what environment you are working in. You can blow the dust off a spare drive from the cabinet and find a way to bring it back to life as you wipe your hands on your jeans, or you can set up an environment where everything is covered by 24/7 support and you don't have to worry about a thing. Not to mention, you won't get your suit dirty. Don't get me wrong; I have thrown away a couple pairs of nice slacks after crawling under raised floors, whereas other times I went home with a clean T-shirt.
Flavors: All
File servers generally have a fast network interface (or even multiple interfaces), fast drives, a lot of memory, multiple CPUs, a lot of disk space, and full 24/7 support. They have some level of fault tolerance built in to them: redundant controllers, redundant power supplies, or RAID arrays with a hot spare disk that can rebuild the array on-the-fly onto a waiting disk. Another necessity for file servers is shelf spares, stored on site so that you or the vendor can put them in to minimize the amount of downtime. The design includes a backup solution complete with multiple high-density tape drives in a tape jukebox or library subsystem with full support. Did I mention this was a perfect world? At minimum, use hot-pluggable RAID with shelf spares at the ready; drives are usually the first to go.
Of course, if the network goes out, all access to the files is gone--not only for one user, but for all the users. Likewise, if the system were to go down for any reason, all the users would be affected. One of the best parts for the users is that when they experience a system drive failure on their own workstations, they can walk over to another system on the network (if one is available and configured similarly), log in, and continue to work until their systems are fixed.
If home directories are being remotely accessed from a file server, it is best to have both systems speaking the same version of NFS. You should use a hard NFS mount instead of a soft one. If the server drops off the network for any reason, the local workstation reestablishes the mount point when the server comes back on the network. This works in theory, but on some flavors of UNIX the NFS mount can go stale.
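The hard mount is set with the mount options on the NFS filesystem. A minimal sketch of an /etc/fstab entry follows; the server name fileserver and the export path /export/home are hypothetical, and the exact option syntax and fstab location vary by flavor:

    # Home directories from the file server, hard-mounted.
    # "hard" makes the client retry forever if the server goes away;
    # "intr" lets a user interrupt a hung request from the keyboard.
    fileserver:/export/home  /home  nfs  rw,hard,intr  0  0

With a soft mount, requests eventually time out and return errors to the application, which can quietly mangle a user's files; with a hard mount, processes simply block until the server comes back.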
Home directories are a requirement for an account to function properly. If the directory listed for an account is not found, some flavors deny access to the system altogether, whereas others drop the user in the root (/) directory. Email also does not get delivered to the account's mailbox file.
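The home directory is the sixth field of the account's passwd entry, so it is easy to check that it actually exists. A quick sketch, with a hypothetical account jdoe:

    % grep '^jdoe:' /etc/passwd
    jdoe:x:1001:100:Jane Doe:/home/jdoe:/bin/sh
    % ls -ld /home/jdoe
    drwxr-xr-x  12 jdoe  staff  1024 Jan 14 09:12 /home/jdoe

If the ls fails, create the directory, copy in any skeleton files your flavor provides, and make sure the user owns it before they log in again.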
Many users have attempted to adopt the best of both worlds in an effort to work locally. Although the following approaches all worked, not all of them are efficient ways to maintain a home directory.
Some administrators set up local work areas for users on their workstations: copy all the necessary files down from the server in the morning, work on them locally throughout the day, and copy the files back up to the server in the evening so the nightly backups catch them. The users gain the speed of working locally on the files, but because the home directory itself still goes over the network, if the server drops off the network the user is forced down as well. They gain only speed.
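A minimal sketch of such a morning and evening copy, assuming rsync is available on both ends; the server name fileserver, the user jdoe, and the paths are all hypothetical:

    #!/bin/sh
    # Morning: pull the working set from the server to the local disk.
    rsync -a fileserver:/home/jdoe/work/ /local/jdoe/work/

    # Evening: push the day's changes back up so the nightly
    # backups on the server pick them up.
    rsync -a /local/jdoe/work/ fileserver:/home/jdoe/work/

Run the first copy from the user's login script and the second from cron before the backup window, and the shuffling takes care of itself.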
The increase in speed and capacity of removable storage has led some users to want to point their home directories at removable media. At the end of the day, they transfer the files up to the server so a backup can be run on them. These users often work from computer to computer, never staying very long at one system. Some users actually carry around external hard disk drives and consider them removable: they simply plug the SCSI cable into whatever computer they are using that day.
Here's a true story. (It might sound good in theory--well, maybe not. I give the administrator credit for originality.) A new Hierarchical Storage Management (HSM) system was brought into a company. After the cache reached a certain threshold, it would write off to tape; if it didn't reach the threshold, it was set to flush at the end of every night. All day long the admin read from and wrote to the HSM cache as his home directory, knowing all the data would be archived off that evening. The next day, everything from the cache was on the tape--and at the end of the tape, too. When he logged in to the system, it took 3-5 minutes to access any single file. Needless to say, I'm told the home directory isn't pointing there anymore, but it did work.