Administration Guide




Administering Server Machines

This chapter describes how to administer an AFS server machine, covering its local configuration information and the most common administrative tasks.

To learn how to install and configure a new server machine, see the AFS Quick Beginnings.

To learn how to administer the server processes themselves, see Monitoring and Controlling Server Processes.

To learn how to administer volumes, see Managing Volumes.


Summary of Instructions

This chapter explains how to perform the following tasks by using the indicated commands:
Install new binaries: bos install
Examine binary check-and-restart time: bos getrestart
Set binary check-and-restart time: bos setrestart
Examine compilation dates on binary files: bos getdate
Restart a process to use new binaries: bos restart
Revert to old version of binaries: bos uninstall
Remove obsolete .BAK and .OLD versions: bos prune
List partitions on a file server machine: vos listpart
Shut down AFS server processes: bos shutdown
List volumes on a partition: vos listvol
Move read/write volumes: vos move
List a cell's database server machines: bos listhosts
Add a database server machine to the server CellServDB file: bos addhost
Remove a database server machine from the server CellServDB file: bos removehost
Set authorization checking requirements: bos setauth
Prevent authentication for bos, pts, and vos commands: include the -noauth flag
Prevent authentication for kas commands: include the -noauth flag on some commands, or issue the noauthentication instruction in interactive mode
Display all VLDB server entries: vos listaddrs
Remove a VLDB server entry: vos changeaddr
Reboot a server machine remotely: bos exec reboot_command


Local Disk Files on a Server Machine

Several types of files must reside in the subdirectories of the /usr/afs directory on an AFS server machine's local disk. They include binaries, configuration files, the administrative database files (on database server machines), log files, and volume header files.

Note for Windows users: Some files described in this document may not exist on machines that run a Windows operating system. Also, Windows uses a backslash ( \ ) rather than a forward slash ( / ) to separate the elements in a pathname.

Binaries in the /usr/afs/bin Directory

The /usr/afs/bin directory stores the AFS server process and command suite binaries appropriate for the machine's system (CPU and operating system) type. If a process has both a server portion and a client portion (as with the Update Server) or if it has separate components (as with the fs process), each component resides in a separate file.

To ensure predictable system performance, all file server machines must run the same AFS build version of a given process. To maintain consistency easily, use the Update Server process to distribute binaries from a binary distribution machine of each system type, as described further in Binary Distribution Machines.

It is best to keep the binaries for all processes in the /usr/afs/bin directory, even if you do not run the process actively on the machine. It simplifies the process of reconfiguring machines (for example, adding database server functionality to an existing file server machine). Similarly, it is best to keep the command suite binaries in the directory, even if you do not often issue commands while working on the server machine. It enables you to issue commands during recovery from server and machine outages.
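For example, to check the compilation date recorded in the fileserver binary on a machine and then install a newly built copy into the /usr/afs/bin directory, you can issue commands like the following. The machine name fs1.abc.com and the path to the new binary are illustrative only. In the conventional configuration, you issue the bos install command against the binary distribution machine for the system type, and the Update Server distributes the new binary to the other machines.

   % bos getdate fs1.abc.com fileserver
   % bos install fs1.abc.com /tmp/fileserver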

The following list describes the binary files in the /usr/afs/bin directory that are directly related to the AFS server processes or command suites. Other binaries (for example, for the klog command) sometimes appear in this directory on a particular file server machine's disk or in an AFS distribution.

backup
The command suite for the AFS Backup System (the binary for the Backup Server is buserver).

bos
The command suite for communicating with the Basic OverSeer (BOS) Server (the binary for the BOS Server is bosserver).

bosserver
The binary for the Basic OverSeer (BOS) Server process.

buserver
The binary for the Backup Server process.

fileserver
The binary for the File Server component of the fs process.

kas
The command suite for communicating with the Authentication Server (the binary for the Authentication Server is kaserver).

kaserver
The binary for the Authentication Server process.

ntpd
The binary for the Network Time Protocol Daemon (NTPD). AFS redistributes this binary and uses the runntp program to configure and initialize the NTPD process.

ntpdc
A debugging utility furnished with the ntpd program.

pts
The command suite for communicating with the Protection Server process (the binary for the Protection Server is ptserver).

ptserver
The binary for the Protection Server process.

runntp
The binary for the program used to configure NTPD most appropriately for use with AFS.

salvager
The binary for the Salvager component of the fs process.

udebug
The binary for a program that reports the status of AFS's distributed database technology, Ubik.

upclient
The binary for the client portion of the Update Server process.

upserver
The binary for the server portion of the Update Server process.

vlserver
The binary for the Volume Location (VL) Server process.

volserver
The binary for the Volume Server component of the fs process.

vos
The command suite for communicating with the Volume and VL Server processes (the binaries for the servers are volserver and vlserver, respectively).

Common Configuration Files in the /usr/afs/etc Directory

The directory /usr/afs/etc on every file server machine's local disk contains configuration files in ASCII and machine-independent binary format. For predictable AFS performance throughout a cell, all server machines must have the same version of each configuration file.

Never directly edit any of the files in the /usr/afs/etc directory, except as directed by instructions for dealing with emergencies. In normal circumstances, use the appropriate bos commands to change the files. The following list includes pointers to instructions.

The files in this directory include:

CellServDB
An ASCII file that names the cell's database server machines, which run the Authentication, Backup, Protection, and VL Server processes. You create the initial version of this file by issuing the bos setcellname command while installing your cell's first server machine. It is very important to update this file when you change the identity of your cell's database server machines.

The server CellServDB file is not the same as the CellServDB file stored in the /usr/vice/etc directory on client machines. The client version lists the database server machines for every AFS cell that you choose to make accessible from the client machine. The server CellServDB file lists only the local cell's database server machines, because server processes never contact processes in other cells.

For instructions on maintaining this file, see Maintaining the Server CellServDB File.
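For example, to add a new database server machine to the server CellServDB file, you can issue a command like the following; the machine names here are illustrative only. In cells that use the system control machine to distribute the file, issue the command against the system control machine rather than an individual server machine.

   % bos addhost fs1.abc.com db3.abc.com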

KeyFile
A machine-independent, binary-format file that lists the server encryption keys the AFS server processes use to encrypt and decrypt tickets. The information in this file is the basis for secure communication in the cell, and so is extremely sensitive. The file is specially protected so that only privileged users can read or change it.

For instructions on maintaining this file, see Managing Server Encryption Keys.

ThisCell
An ASCII file that consists of a single line defining the complete Internet domain-style name of the cell (such as abc.com). You create this file with the bos setcellname command during the installation of your cell's first file server machine, as instructed in the AFS Quick Beginnings.

Note that changing this file is only one step in changing your cell's name. For discussion, see Choosing a Cell Name.

UserList
An ASCII file that lists the usernames of the system administrators authorized to issue privileged bos, vos, and backup commands. For instructions on maintaining the file, see Administering the UserList File.

Local Configuration Files in the /usr/afs/local Directory

The directory /usr/afs/local contains configuration files that are different for each file server machine in a cell. Thus, they are not updated automatically from a central source like the files in the /usr/afs/bin and /usr/afs/etc directories. The most important file is the BosConfig file; it defines which server processes run on that machine.

As with the common configuration files in /usr/afs/etc, you must not edit these files directly. Use commands from the bos command suite where appropriate; some files never need to be altered.

The files in this directory include the following:

BosConfig
This file lists the server processes to run on the server machine, by defining which processes the BOS Server monitors and what it does if the process fails. It also defines the times at which the BOS Server automatically restarts processes for maintenance purposes.

As you create server processes during a file server machine's installation, their entries are defined in this file automatically. The AFS Quick Beginnings outlines the bos commands to use. For a more complete description of the file, and instructions for controlling process status by editing the file with commands from the bos suite, see Monitoring and Controlling Server Processes.

NetInfo
This optional ASCII file lists one or more of the network interface addresses on the server machine. If it exists when the File Server initializes, the File Server uses it as the basis for the list of interfaces that it registers in its Volume Location Database (VLDB) server entry. See Managing Server IP Addresses and VLDB Server Entries.

NetRestrict
This optional ASCII file lists one or more network interface addresses. If it exists when the File Server initializes, the File Server removes the specified addresses from the list of interfaces that it registers in its VLDB server entry. See Managing Server IP Addresses and VLDB Server Entries.

NoAuth
This zero-length file instructs all AFS server processes running on the machine not to perform authorization checking. Thus, they perform any action for any user, even anonymous. This very insecure state is useful only in rare instances, mainly during the installation of the machine.

The file is created automatically when you start the initial bosserver process with the -noauth flag, or issue the bos setauth command to turn off authentication requirements. When you use the bos setauth command to turn on authentication, the BOS Server removes this file. For more information, see Managing Authentication and Authorization Requirements.
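For example, to turn authorization checking back on for a machine (which also causes the BOS Server to remove the NoAuth file), you can issue a command like the following; the machine name is illustrative only:

   % bos setauth fs1.abc.com on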

SALVAGE.fs
This zero-length file controls how the BOS Server handles a crash of the File Server component of the fs process. The BOS Server creates this file each time it starts or restarts the fs process. If the file is present when the File Server crashes, the BOS Server runs the Salvager before restarting the File Server and Volume Server. When the File Server exits normally, the BOS Server removes the file so that the Salvager does not run.

Do not create or remove this file yourself; the BOS Server does so automatically. If necessary, you can salvage a volume or partition by using the bos salvage command; see Salvaging Volumes.
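For example, to salvage all of the volumes on one partition of a machine, you can issue a command like the following; the machine and partition names are illustrative only:

   % bos salvage fs1.abc.com /vicepb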

salvage.lock
This file guarantees that only one Salvager process runs on a file server machine at a time (the single process can fork multiple subprocesses to salvage multiple partitions in parallel). As the Salvager initiates (when invoked by the BOS Server or by issuing the bos salvage command), it creates this zero-length file and issues the flock system call on it. It removes the file when it completes the salvage operation. Because the Salvager must lock the file in order to run, only one Salvager can run at a time.

sysid
This file records the network interface addresses that the File Server (fileserver process) registers in its VLDB server entry. When the Cache Manager requests volume location information, the Volume Location (VL) Server provides all of the interfaces registered for each server machine that houses the volume. This enables the Cache Manager to make use of multiple addresses when accessing AFS data stored on a multihomed file server machine. For further information, see Managing Server IP Addresses and VLDB Server Entries.

Replicated Database Files in the /usr/afs/db Directory

The directory /usr/afs/db contains two types of files pertaining to the four replicated databases in the cell--the Authentication Database, Backup Database, Protection Database, and Volume Location Database (VLDB):

Each database server process (Authentication, Backup, Protection, or VL Server) maintains its own database and log files. The database files are in binary format, so you must always access or alter them using commands from the kas suite (for the Authentication Database), backup suite (for the Backup Database), pts suite (for the Protection Database), or vos suite (for the VLDB).

If a cell runs more than one database server machine, each database server process keeps its own copy of its database on its machine's hard disk. However, it is important that all the copies of a given database are the same. To synchronize them, the database server processes call on AFS's distributed database technology, Ubik, as described in Replicating the AFS Administrative Databases.

The files listed here appear in this directory only on database server machines. On non-database server machines, this directory is empty.

bdb.DB0
The Backup Database file.

bdb.DBSYS1
The Backup Database log file.

kaserver.DB0
The Authentication Database file.

kaserver.DBSYS1
The Authentication Database log file.

prdb.DB0
The Protection Database file.

prdb.DBSYS1
The Protection Database log file.

vldb.DB0
The Volume Location Database file.

vldb.DBSYS1
The Volume Location Database log file.

Log Files in the /usr/afs/logs Directory

The /usr/afs/logs directory contains log files from various server processes. The files detail interesting events that occur during normal operations. For instance, the Volume Server can record volume moves in the VolserLog file. Events are recorded only at completion, so the server processes do not use these files to reconstruct failed operations, unlike the database log files in the /usr/afs/db directory.

The information in log files can be very useful as you evaluate process failures and other problems. For instance, if you receive a timeout message when you try to access a volume, checking the FileLog file can provide an explanation, perhaps showing that the File Server was unable to attach the volume. To examine a log file remotely, use the bos getlog command as described in Displaying Server Process Log Files.
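For example, to display the File Server's log file from a machine, you can issue a command like the following; the machine name is illustrative only:

   % bos getlog fs1.abc.com FileLog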

This directory also contains the core image files generated if a process being monitored by the BOS Server crashes. The BOS Server attempts to add an extension to the standard core name to indicate which process generated the core file (for example, naming a core file generated by the Protection Server core.ptserver). If two processes fail at about the same time, however, the BOS Server cannot always assign the correct extension, so the name is not guaranteed to identify the process that crashed.

The directory contains the following files:

AuthLog
The Authentication Server's log file.

BackupLog
The Backup Server's log file.

BosLog
The BOS Server's log file.

FileLog
The File Server's log file.

SalvageLog
The Salvager's log file.

VLLog
The Volume Location (VL) Server's log file.

VolserLog
The Volume Server's log file.

core.process
If present, a core image file produced when an AFS server process on the machine crashed (probably the process named by process).
Note: To prevent log files from growing unmanageably large, restart the server processes periodically, particularly the database server processes. To avoid restarting the processes, use the UNIX rm command to remove a log file as the process runs; the process re-creates it automatically.

Volume Headers on Server Partitions

A partition that houses AFS volumes must be mounted at a subdirectory of the machine's root ( / ) directory (not, for instance, under the /usr directory). The file server machine's file system registry file (/etc/fstab or equivalent) must correctly map the directory name and the partition's device name. The directory name is of the form /vicepindex, where index is one or two lowercase letters. By convention, the first AFS partition on a machine is mounted at /vicepa, the second at /vicepb, and so on. If there are more than 26 partitions, continue with /vicepaa, /vicepab, and so on. The AFS Release Notes specify the number of supported partitions per server machine.
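For example, on a machine whose file system registry file is /etc/fstab, an entry for the first AFS partition can resemble the following. The device name and file system type here are illustrative only; the exact format differs by operating system.

   /dev/sd0g   /vicepa   ufs   rw   1   2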

Do not store non-AFS files on AFS partitions. The File Server and Volume Server expect to have available all of the space on the partition.

The /vicep directories contain two types of files:

Vvol_ID.vol
Each such file is a volume header. The vol_ID corresponds to the volume ID number displayed in the output from the vos examine, vos listvldb, and vos listvol commands.

FORCESALVAGE
This zero-length file triggers the Salvager to salvage the entire partition. The AFS-modified version of the fsck program creates this file if it discovers corruption.
Note: For most system types, it is important never to run the standard fsck program provided with the operating system on an AFS file server machine. It removes all AFS volume data from server partitions because it does not recognize their format.

The Four Roles for File Server Machines

In cells that have more than one server machine, not all server machines have to perform exactly the same functions. There are four possible roles a machine can assume, determined by which server processes it is running. A machine can assume more than one role by running all of the relevant processes. The following list summarizes the four roles, which are described more completely in subsequent sections.

If a cell has a single server machine, it assumes the simple file server and database server roles. The instructions in the AFS Quick Beginnings also have you configure it as the system control machine and binary distribution machine for its system type, but it does not actually perform those functions until you install another server machine.

It is best to keep the binaries for all of the AFS server processes in the /usr/afs/bin directory, even if not all processes are running. You can then change which roles a machine assumes simply by starting or stopping the processes that define the role.

Simple File Server Machines

A simple file server machine runs only the server processes that store and deliver AFS files to client machines, monitor process status, and pick up binaries and configuration files from the cell's binary distribution and system control machines.

In general, only cells with more than three server machines need to run simple file server machines. In cells with three or fewer machines, all of them are usually database server machines (to benefit from replicating the administrative databases); see Database Server Machines.

The following processes run on a simple file server machine:

Database Server Machines

A database server machine runs the four processes that maintain the AFS replicated administrative databases: the Authentication Server, Backup Server, Protection Server, and Volume Location (VL) Server, which maintain the Authentication Database, Backup Database, Protection Database, and Volume Location Database (VLDB), respectively. To review the functions of these server processes and their databases, see AFS Server Processes and the Cache Manager.

If a cell has more than one server machine, it is best to run more than one database server machine, but more than three are rarely necessary. Replicating the databases in this way yields the same benefits as replicating volumes: increased availability and reliability of information. If one database server machine or process goes down, the information in the database is still available from others. The load of requests for database information is spread across multiple machines, preventing any one from becoming overloaded.

Unlike replicated volumes, however, replicated databases do change frequently. Consistent system performance demands that all copies of the database always be identical, so it is not possible to record changes in only some of them. To synchronize the copies of a database, the database server processes use AFS's distributed database technology, Ubik. See Replicating the AFS Administrative Databases.

It is critical that the AFS server processes on every server machine in a cell know which machines are the database server machines. The database server processes in particular must maintain constant contact with their peers in order to coordinate the copies of the database. The other server processes often need information from the databases. Every file server machine keeps a list of its cell's database server machines in its local /usr/afs/etc/CellServDB file. Cells that use the United States edition of AFS can use the system control machine to distribute this file (see The System Control Machine).

The following processes define a database server machine:

Database server machines can also run the processes that define a simple file server machine, as listed in Simple File Server Machines. One database server machine can act as the cell's system control machine, and any database server machine can serve as the binary distribution machine for its system type; see The System Control Machine and Binary Distribution Machines.

Binary Distribution Machines

A binary distribution machine stores and distributes the binary files for the AFS processes and command suites to all other server machines of its system type. Each file server machine keeps its own copy of AFS server process binaries on its local disk, by convention in the /usr/afs/bin directory. For consistent system performance, however, all server machines must run the same version (build level) of a process. For instructions for checking a binary's build level, see Displaying A Binary File's Build Level. The easiest way to keep the binaries consistent is to have a binary distribution machine of each system type distribute them to its system-type peers.

The process that defines a binary distribution machine is the server portion of the Update Server (upserver process). The client portion of the Update Server (upclientbin process) runs on the other server machines of that system type and references the binary distribution machine.

Binary distribution machines usually also run the processes that define a simple file server machine, as listed in Simple File Server Machines. One binary distribution machine can act as the cell's system control machine, and any binary distribution machine can serve as a database server machine; see The System Control Machine and Database Server Machines.

The System Control Machine

In cells that run the United States edition of AFS, the system control machine stores and distributes system configuration files shared by all of the server machines in the cell. Each file server machine keeps its own copy of the configuration files on its local disk, by convention in the /usr/afs/etc directory. For consistent system performance, however, all server machines must use the same files. The easiest way to keep the files consistent is to have the system control machine distribute them. You make changes only to the copy stored on the system control machine, as directed by the instructions in this document. The United States edition of AFS is available to cells in the United States and Canada and to selected institutions in other countries, as determined by United States government regulations.

Cells that run the international version of AFS do not use the system control machine to distribute system configuration files. Some of the files contain information that is too sensitive to cross the network unencrypted, and United States government regulations forbid the export of the necessary encryption routines in the form that the Update Server uses. You must instead update the configuration files on each file server machine individually. The bos commands that you use to update the files encrypt the information using an exportable form of the encryption routines.

For a list of the configuration files stored in the /usr/afs/etc directory, see Common Configuration Files in the /usr/afs/etc Directory.

The AFS Quick Beginnings configures a cell's first server machine as the system control machine. If you wish, you can reassign the role to a different machine that you install later, but you must then change the client portion of the Update Server (upclientetc) process running on all other server machines to refer to the new system control machine.

The following processes define the system control machine:

The system control machine can also run the processes that define a simple file server machine, as listed in Simple File Server Machines. It can also serve as a database server machine, and by convention acts as the binary distribution machine for its system type. A single upserver process can distribute both configuration files and binaries. See Database Server Machines and Binary Distribution Machines.

To locate database server machines

  1. Issue the bos listhosts command.
       % bos listhosts <machine name>
    

    The machines listed in the output are the cell's database server machines. For complete instructions and example output, see To display a cell's database server machines.

  2. (Optional) Issue the bos status command to verify that a machine listed in the output of the bos listhosts command is actually running the processes that define it as a database server machine. For complete instructions, see Displaying Process Status and Information from the BosConfig File.
       % bos status <machine name> buserver kaserver ptserver vlserver
    

    If the specified machine is a database server machine, the output from the bos status command includes the following lines:

       Instance buserver, currently running normally.
       Instance kaserver, currently running normally.
       Instance ptserver, currently running normally.
       Instance vlserver, currently running normally.
    

To locate the system control machine

  1. Issue the bos status command for any server machine. Complete instructions appear in Displaying Process Status and Information from the BosConfig File.
       % bos status <machine name> upserver upclientbin upclientetc -long
    

    The output you see depends on the machine you have contacted: a simple file server machine, the system control machine, or a binary distribution machine. See Interpreting the Output from the bos status Command.

To locate the binary distribution machine for a system type

  1. Issue the bos status command for a file server machine of the system type you are checking (to determine a machine's system type, issue the fs sysname or sys command as described in Displaying and Setting the System Type Name). Complete instructions for the bos status command appear in Displaying Process Status and Information from the BosConfig File.
       % bos status <machine name> upserver upclientbin upclientetc -long
    

    The output you see depends on the machine you have contacted: a simple file server machine, the system control machine, or a binary distribution machine. See Interpreting the Output from the bos status Command.

Interpreting the Output from the bos status Command

Interpreting the output of the bos status command is most straightforward for a simple file server machine. There is no upserver process, so the output includes the following message:

   bos: failed to get instance info for 'upserver' (no such entity)

A simple file server machine runs the upclientbin process, so the output includes a message like the following. It indicates that fs7.abc.com is the binary distribution machine for this system type.

   Instance upclientbin, (type is simple) currently running normally.
   Process last started at Wed Mar 10  23:37:09 1999 (1 proc start)
   Command 1 is '/usr/afs/bin/upclient fs7.abc.com -t 60 /usr/afs/bin'

If you run the United States edition of AFS, a simple file server machine also runs the upclientetc process, so the output includes a message like the following. It indicates that fs1.abc.com is the system control machine.

   Instance upclientetc, (type is simple) currently running normally.
   Process last started at Mon Mar 22  05:23:49 1999 (1 proc start)
   Command 1 is '/usr/afs/bin/upclient fs1.abc.com -t 60 /usr/afs/etc'

The Output on the System Control Machine

If you run the United States edition of AFS and have issued the bos status command for the system control machine, the output includes an entry for the upserver process similar to the following:

   Instance upserver, (type is simple) currently running normally.
   Process last started at Mon Mar 22 05:23:54 1999 (1 proc start)
   Command 1 is '/usr/afs/bin/upserver'

If you are using the default configuration recommended in the AFS Quick Beginnings, the system control machine is also the binary distribution machine for its system type, and a single upserver process distributes both kinds of updates. In that case, the output includes the following messages:

   bos: failed to get instance info for 'upclientbin' (no such entity)
   bos: failed to get instance info for 'upclientetc' (no such entity)

If the system control machine is not a binary distribution machine, the output includes an error message for the upclientetc process, but a complete listing for the upclientbin process (in this case it refers to the machine fs5.abc.com as the binary distribution machine):

   Instance upclientbin, (type is simple) currently running normally.
   Process last started at Mon Mar 22  05:23:49 1999 (1 proc start)
   Command 1 is '/usr/afs/bin/upclient fs5.abc.com -t 60 /usr/afs/bin'
   bos: failed to get instance info for 'upclientetc' (no such entity)

The Output on a Binary Distribution Machine

If you have issued the bos status command for a binary distribution machine, the output includes an entry for the upserver process similar to the following and an error message for the upclientbin process:

   Instance upserver, (type is simple) currently running normally.
   Process last started at Mon Apr 5 05:23:54 1999 (1 proc start)
   Command 1 is '/usr/afs/bin/upserver'
   bos: failed to get instance info for 'upclientbin' (no such entity)

Unless this machine also happens to be the system control machine, a message like the following references the system control machine (in this case, fs3.abc.com):

   Instance upclientetc, (type is simple) currently running normally.
   Process last started at Mon Apr 5 05:23:49 1999 (1 proc start)
   Command 1 is '/usr/afs/bin/upclient fs3.abc.com -t 60 /usr/afs/etc'

Administering Database Server Machines

This section explains how to administer database server machines. For installation instructions, see the AFS Quick Beginnings.

Replicating the AFS Administrative Databases

There are several benefits to replicating the AFS administrative databases (the Authentication, Backup, Protection, and Volume Location Databases), as discussed in Replicating the AFS Administrative Databases. For correct cell functioning, the copies of each database must be identical at all times. To keep the databases synchronized, AFS uses a library of utilities called Ubik. Each database server process runs an associated lightweight Ubik process, and client-side programs call Ubik's client-side subroutines when they submit requests to read and change the databases.

Ubik is designed to work with minimal administrator intervention, but there are several configuration requirements, as detailed in Configuring the Cell for Proper Ubik Operation. The following brief overview of Ubik's operation is helpful for understanding the requirements. For more details, see How Ubik Operates Automatically.

Ubik is designed to distribute changes made in an AFS administrative database to all copies as quickly as possible. Only one copy of the database, the synchronization site, accepts change requests from clients; the lightweight Ubik process running there is the Ubik coordinator. To maintain maximum availability, there is a separate Ubik coordinator for each database, and the synchronization site for each of the four databases can be on a different machine. The synchronization site for a database can also move from machine to machine in response to process, machine, or network outages.

The other copies of a database, and the Ubik processes that maintain them, are termed secondary. The secondary sites do not accept database changes directly from client-side programs, but only from the synchronization site.

After the Ubik coordinator records a change in its copy of a database, it immediately sends the change to the secondary sites. During the brief distribution period, clients cannot access any of the copies of the database, even for reading. If the coordinator cannot reach a majority of the secondary sites, it halts the distribution and informs the client that the attempted change failed.

To avoid distribution failures, the Ubik processes maintain constant contact by exchanging time-stamped messages. As long as a majority of the secondary sites respond to the coordinator's messages, there is a quorum of sites that are synchronized with the coordinator. If a process, machine, or network outage breaks the quorum, the Ubik processes attempt to elect a new coordinator in order to establish a new quorum among the highest possible number of sites. See A Flexible Coordinator Boosts Availability.

Configuring the Cell for Proper Ubik Operation

This section describes how to configure your cell to maintain proper Ubik operation.

How Ubik Operates Automatically

The following Ubik features help keep its maintenance requirements to a minimum:

How Ubik Uses Timestamped Messages

Ubik synchronizes the copies of a database by maintaining constant contact between the synchronization site and the secondary sites. The Ubik coordinator frequently sends a time-stamped guarantee message to each of the secondary sites. When the secondary site receives the message, it concludes that it is in contact with the coordinator. It considers its copy of the database to be valid until time T, which is usually 60 seconds from the time the coordinator sent the message. In response, the secondary site returns a vote message that acknowledges the coordinator as valid until a certain time X, which is usually 120 seconds in the future.

The coordinator sends guarantee messages more frequently than every T seconds, so that the expiration periods overlap. There is no danger of expiration unless a network partition or other outage actually interrupts communication. If the guarantee expires, the secondary site's copy of the database is not necessarily current. Nonetheless, the database server continues to service client requests. It is considered better for overall cell functioning that a secondary site remains accessible even if the information it is distributing is possibly out of date. Most of the AFS administrative databases do not change that frequently, in any case, and making a database inaccessible causes a timeout for clients that happen to access that copy.

As previously mentioned, Ubik's use of timestamped messages makes it vital to synchronize the clocks on database server machines. There are two ways that skewed clocks can interrupt normal Ubik functioning, depending on which clock is ahead of the others.

Suppose, for example, that the Ubik coordinator's clock is ahead of the secondary sites: the coordinator's clock says 9:35:30, but the secondary clocks say 9:31:30. The secondary sites send vote messages that acknowledge the coordinator as valid until 9:33:30. This is two minutes in the future according to the secondary clocks, but is already in the past from the coordinator's perspective. The coordinator concludes that it no longer has enough support to remain coordinator and forces election of a new coordinator. Election takes about three minutes, during which time no copy of the database accepts changes.

The opposite possibility is that a secondary site's clock (14:50:00) is ahead of the coordinator's (14:46:30). When the coordinator sends a guarantee message good until 14:47:30, it has already expired according to the secondary clock. Believing that it is out of contact with the coordinator, the secondary site stops sending votes for the coordinator and tries to get itself elected as coordinator. This is appropriate if the coordinator has actually failed, but is inappropriate when there is no actual outage.

The attempt of a single secondary site to get elected as the new coordinator usually does not affect the performance of the other sites. As long as their clocks agree with the coordinator's, they ignore the other secondary site's request for votes and continue voting for the current coordinator. However, if enough of the secondary sites' clocks get ahead of the coordinator's, they can force election of a new coordinator even though the current one is actually working fine.
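The arithmetic behind this failure mode can be seen in a small sketch. The values are taken from the example above, expressed as seconds relative to the coordinator's clock; the 60-second guarantee period is as described earlier.

```shell
# Toy illustration of clock skew breaking a Ubik guarantee (values assumed
# from the example above; this does not run any AFS software).
coord_now=0                             # coordinator's clock reads 14:46:30
skew=210                                # secondary clock runs 3.5 minutes ahead
expires=$(( coord_now + 60 ))           # guarantee good until 14:47:30
secondary_now=$(( coord_now + skew ))   # the same instant on the skewed clock
if [ "$secondary_now" -gt "$expires" ]; then
  echo "guarantee appears expired"      # so the secondary stops voting
fi
```

With a skew smaller than the guarantee period, the comparison fails and the secondary keeps voting for the coordinator.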

A Flexible Coordinator Boosts Availability

Ubik uses timestamped messages to determine when coordinator election is necessary, just as it does to keep the database copies synchronized. As long as the coordinator receives vote messages from a majority of the sites (it implicitly votes for itself), it is appropriate for it to continue as coordinator because it is successfully distributing database changes. A majority is defined as more than 50% of all database sites when there are an odd number of sites; with an even number of sites, the site with the lowest Internet address has an extra vote for breaking ties as necessary. If the coordinator is not receiving sufficient votes, it retires and the Ubik sites elect a new coordinator. This does not happen spontaneously, but only when the coordinator really fails or stops receiving a majority of the votes. The secondary sites have a built-in bias to continue voting for an existing coordinator, which prevents undue elections.

The election of the new coordinator is by majority vote. The Ubik subprocesses have a bias to vote for the site with the lowest Internet address, which helps that site gather the necessary majority more quickly than if all the sites were competing to receive votes themselves. During the election (which normally lasts less than three minutes), clients can read information from the database, but cannot make any changes.
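The majority rule can be sketched as a small shell function. The encoding below (an ordinary vote counts 2, and the lowest-address site's tie-breaking vote adds 1) is an illustrative assumption, not Ubik's actual implementation:

```shell
# Toy check of the quorum rule: more than 50% of all sites, with the
# lowest-address site breaking a 50/50 tie (encoding assumed for illustration).
quorum() {
  n=$1; votes=$2; has_lowest=$3
  total=$(( votes * 2 + has_lowest ))   # each vote counts 2; tiebreaker adds 1
  [ "$total" -gt "$n" ] && echo yes || echo no
}
quorum 3 2 0   # -> yes  (2 of 3 sites is a majority)
quorum 4 2 1   # -> yes  (half of 4 sites, but including the lowest address)
quorum 4 2 0   # -> no   (half of 4 sites without the tiebreaker)
```

This is why a cell with an even number of database server machines still has a well-defined quorum: the lowest-address site settles the otherwise tied case.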

Ubik's election procedure makes it possible for each database server process's coordinator to be on a different machine. For example, if the Ubik coordinators for all four processes start out on machine A and the Protection Server on machine A fails for some reason, then a different site (say machine B) must be elected as the new Protection Database Ubik coordinator. Machine B remains the coordinator for the Protection Database even after the Protection Server on machine A is working again. The failure of the Protection Server has no effect on the Authentication, Backup, or VL Servers, so their coordinators remain on machine A.

Backing Up and Restoring the Administrative Databases

The AFS administrative databases store information that is critical for AFS operation in your cell. If a database becomes corrupted due to a hardware failure or other problem on a database server machine, it is likely to be difficult and time-consuming to recreate all of the information from scratch. To protect yourself against loss of data, back up the administrative databases to permanent media, such as tape, on a regular basis. The recommended method is to use a standard local disk backup utility such as the UNIX tar command.

When deciding how often to back up a database, consider the amount of data that you are willing to recreate by hand if it becomes necessary to restore the database from a backup copy. In most cells, the databases differ quite a bit in how often and how much they change. Changes to the Authentication Database are probably the least frequent, and consist mostly of changed user passwords. Protection Database and VLDB changes are probably more frequent, as users add or delete groups and change group memberships, and as you and other administrators create or move volumes. The number and frequency of changes is probably greatest in the Backup Database, particularly if you perform backups every day.

The ease with which you can recapture lost changes also differs for the different databases:

These differences between the databases can suggest backing them up at different frequencies, ranging from every few days or weekly for the Backup Database to every few weeks for the Authentication Database. On the other hand, it is probably simpler from a logistical standpoint to back them all up at the same time (and frequently), particularly if tape consumption is not a major concern. Also, it is not generally necessary to keep backup copies of the databases for a long time, so you can recycle the tapes fairly frequently.

To back up the administrative databases

  1. Log in as the local superuser root on a database server machine that is not the synchronization site. The machine with the highest IP address is normally the best choice, since it is least likely to become the synchronization site in an election.

  2. Issue the bos shutdown command to shut down the relevant server process on the local machine. For a complete description of the command, see To stop processes temporarily.

    For the -instance argument, specify one or more database server process names (buserver for the Backup Server, kaserver for the Authentication Server, ptserver for the Protection Server, or vlserver for the Volume Location Server). Include the -localauth flag because you are logged in as the local superuser root but do not necessarily have administrative tokens.

       # bos shutdown <machine name> -instance <instances>+ -localauth [-wait]
    

  3. Use a local disk backup utility, such as the UNIX tar command, to transfer one or more database files to tape. If the local database server machine does not have a tape device attached, use a remote copy command to transfer the file to a machine with a tape device, then use the tar command there.

    The following command sequence backs up the complete contents of the /usr/afs/db directory:

       # cd /usr/afs/db
       # tar cvf  tape_device  .
    

    To back up individual database files, substitute their names for the period in the preceding tar command:

  4. Issue the bos start command to restart the server processes on the local machine. For a complete description of the command, see To start processes by changing their status flags to Run. Provide the same values for the -instance argument as in Step 2, and the -localauth flag for the same reason.
       # bos start <machine name> -instance <server process name>+ -localauth
    

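As a local illustration of the archiving in Step 3, the following sketch builds a stand-in for the /usr/afs/db directory and archives it to a file rather than a tape device. All paths under /tmp are hypothetical; a real backup operates on /usr/afs/db with the server processes shut down as described above.

```shell
# Simulated database directory (stand-in for /usr/afs/db) and a file-based
# archive (stand-in for a tape device).
mkdir -p /tmp/afs-demo/db
touch /tmp/afs-demo/db/vldb.DB0 /tmp/afs-demo/db/prdb.DB0
cd /tmp/afs-demo/db
tar cf /tmp/afs-demo/db-backup.tar .
tar tf /tmp/afs-demo/db-backup.tar | grep DB0   # verify the files were archived
```

Listing the archive before recycling a tape is a cheap way to confirm that the intended database files were actually captured.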
To restore an administrative database

  1. Log in as the local superuser root on each database server machine in the cell.

  2. Working on one of the machines, issue the bos shutdown command once for each database server machine, to shut down the relevant server process on all of them. For a complete description of the command, see To stop processes temporarily.

    For the -instance argument, specify one or more database server process names (buserver for the Backup Server, kaserver for the Authentication Server, ptserver for the Protection Server, or vlserver for the Volume Location Server). Include the -localauth flag because you are logged in as the local superuser root but do not necessarily have administrative tokens.

       # bos shutdown <machine name> -instance <instances>+ -localauth [-wait]
    

  3. Remove the database from each database server machine, by issuing the following commands on each one.
       # cd /usr/afs/db
    

    For the Backup Database:

       # rm bdb.DB0
       # rm bdb.DBSYS1
    

    For the Authentication Database:

       # rm kaserver.DB0
       # rm kaserver.DBSYS1
    

    For the Protection Database:

       # rm prdb.DB0
       # rm prdb.DBSYS1
    

    For the VLDB:

       # rm vldb.DB0
       # rm vldb.DBSYS1
    

  4. Using the local disk backup utility that you used to back up the database, copy the most recently backed-up version of it to the appropriate file on the database server machine with the lowest IP address. The following is an appropriate tar command if the synchronization site has a tape device attached:
       # cd /usr/afs/db
       # tar xvf tape_device  database_file
    

    where database_file is one of the following:

  5. Working on one of the machines, issue the bos start command to restart the server process on each of the database server machines in turn. Start with the machine with the lowest IP address, which becomes the synchronization site for the restored database. Wait for it to establish itself as the synchronization site before repeating the command to restart the process on the other database server machines. For a complete description of the command, see To start processes by changing their status flags to Run. Provide the same values for the -instance argument as in Step 2, and the -localauth flag for the same reason.
       # bos start <machine name> -instance  <server process name>+  -localauth
    

  6. If the database has changed since you last backed it up, issue the appropriate commands from the instructions in the indicated sections to recreate the information in the restored database. If issuing pts commands, you must first obtain administrative tokens. The backup and vos commands accept the -localauth flag if you are logged in as the local superuser root, so you do not need administrative tokens. The Authentication Server always performs a separate authentication anyway, so you only need to include the -admin argument if issuing kas commands.

Installing Server Process Software

This section explains how to install new server process binaries on file server machines, how to revert to a previous version if the current version is not working properly, and how to install new disks to house AFS volumes on a file server machine.

The most frequent reason to replace a server process's binaries is to upgrade AFS to a new version. In general, installation instructions accompany the updated software, but this chapter provides an additional reference.

Each AFS server machine must store the server process binaries in a local disk directory, called /usr/afs/bin by convention. For predictable system performance, it is best that all server machines run the same build level, or at least the same version, of the server software. For instructions on checking AFS build level, see Displaying A Binary File's Build Level.

The Update Server makes it easy to distribute a consistent version of software to all server machines. You designate one server machine of each system type as the binary distribution machine by running the server portion of the Update Server (upserver process) on it. All other server machines of that system type run the client portion of the Update Server (upclientbin process) to retrieve updated software from the binary distribution machine. The AFS Quick Beginnings explains how to install the appropriate processes. For more on binary distribution machines, see Binary Distribution Machines.

When you use the Update Server, you install new binaries on binary distribution machines only. If you install binaries directly on a machine that is running the upclientbin process, they are overwritten the next time the process compares the contents of the local /usr/afs/bin directory to the contents on the binary distribution machine, by default within five minutes.

The following instructions explain how to use the appropriate commands from the bos suite to install and uninstall server binaries.

Installing New Binaries

An AFS server process does not automatically switch to a new process binary file as soon as it is installed in the /usr/afs/bin directory. The process continues to use the previous version of the binary file until it (the process) next restarts. By default, the BOS Server restarts processes for which there are new binary files every day at 5:00 a.m., as specified in the /usr/afs/local/BosConfig file. To display or change this binary restart time, use the bos getrestart and bos setrestart commands, as described in Setting the BOS Server's Restart Times.

You can force the server machine to start using new server process binaries immediately by issuing the bos restart command as described in the following instructions.

You do not need to restart processes when you install new command suite binaries. The new binary is invoked automatically the next time a command from the suite is issued.

When you use the bos install command, the BOS Server automatically saves the current version of a binary file by adding a .BAK extension to its name. It renames the current .BAK version, if any, to the .OLD version, if there is no .OLD version already. If there is a current .OLD version, the current .BAK version must be at least seven days old to replace it.

It is best to store AFS binaries in the /usr/afs/bin directory, because that is the only directory the BOS Server automatically checks for new binaries. You can, however, use the bos install command's -dir argument to install non-AFS binaries into other directories on a server machine's local disk. See the command's reference page in the AFS Administration Reference for further information.
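The rename chain that the bos install command performs can be sketched with ordinary file operations. The simulation below runs under /tmp; the real work happens in /usr/afs/bin, and the seven-day age check on an existing .OLD version (described above) is omitted here.

```shell
# Simulated /usr/afs/bin with a current binary and a .BAK version already
# present; no .OLD version exists yet.
mkdir -p /tmp/afs-bin
cd /tmp/afs-bin
touch volserver volserver.BAK
# Rotation performed on install: .BAK -> .OLD (only because no .OLD exists;
# the real BOS Server also requires the .BAK copy to be seven days old
# before replacing an existing .OLD), then current -> .BAK.
[ ! -e volserver.OLD ] && mv volserver.BAK volserver.OLD
mv volserver volserver.BAK
touch volserver     # stand-in for the newly installed binary
ls                  # all three versions now present
```

After the rotation, the directory holds the new current version plus the two fallback versions that bos uninstall can later promote.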

To install new server binaries

  1. Verify that you are listed in the /usr/afs/etc/UserList file. If necessary, issue the bos listusers command, which is fully described in To display the users in the UserList file.
       % bos listusers <machine name>
    

  2. Verify that the binaries are available in the source directory from which you are installing them. If the machine is also an AFS client, you can retrieve the binaries from a central directory in AFS. Otherwise, you can obtain them directly from the AFS distribution media, from a local disk directory where you previously installed them, or from a remote machine using a transfer utility such as the ftp command.

  3. Issue the bos install command for the binary distribution machine. (If you have forgotten which machine is performing that role, see To locate the binary distribution machine for a system type.)
       % bos install <machine name> <files to install>+
    

    where

    i
    Is the shortest acceptable abbreviation of install.

    machine name
    Names the binary distribution machine.

    files to install
    Names each binary file to install into the local /usr/afs/bin directory. Partial pathnames are interpreted relative to the current working directory. The last element in each pathname (the filename itself) matches the name of the file it is replacing, such as bosserver or volserver for server processes, bos or vos for commands.

    Each AFS server process other than the fs process uses a single binary file. The fs process uses three binary files: fileserver, volserver, and salvager. Installing a new version of one component does not necessarily mean that you need to replace all three.

  4. Repeat Step 3 for each binary distribution machine.

  5. (Optional) If you want to restart processes to use the new binaries immediately, wait until the upclientbin process retrieves them from the binary distribution machine. You can verify the timestamps on binary files by using the bos getdate command as described in Displaying Binary Version Dates. When the binary files are available on each server machine, issue the bos restart command, for which complete instructions appear in Stopping and Immediately Restarting Processes.

    If you are working on an AFS client machine, it is a wise precaution to have a copy of the bos command suite binaries on the local disk before restarting server processes. In the conventional configuration, the /usr/afsws/bin directory that houses the bos command binary on client machines is a symbolic link into AFS, which conserves local disk space. However, restarting certain processes (particularly the database server processes) can make the AFS filespace inaccessible, particularly if a problem arises during the restart. Having a local copy of the bos binary enables you to uninstall or reinstall process binaries or restart processes even in this case. Use the cp command to copy the bos command binary from the /usr/afsws/bin directory to a local directory such as /tmp.

    Restarting a process causes a service outage. It is best to perform the restart at times of low system usage if possible.

       % bos restart <machine name> <instances>+
    

Reverting to the Previous Version of Binaries

In rare cases, installing a new binary can cause problems serious enough to require reverting to the previous version. Just as with installing binaries, consistent system performance requires reverting every server machine back to the same version. Issue the bos uninstall command described here on each binary distribution machine.

When you use the bos uninstall command, the BOS Server discards the current version of a binary file and promotes the .BAK version of the file by removing the extension. It renames the current .OLD version, if any, to .BAK.

If there is no current .BAK version, the bos uninstall operation fails and generates an error message. If a .OLD version still exists, issue the mv command to rename it to .BAK before reissuing the bos uninstall command.
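The reverse rotation that bos uninstall performs can likewise be sketched with ordinary file operations. The simulation below runs under /tmp with a stand-in vos binary; the real command operates on /usr/afs/bin.

```shell
# Simulated /usr/afs/bin with all three versions of a binary present.
mkdir -p /tmp/afs-revert
cd /tmp/afs-revert
touch vos vos.BAK vos.OLD
# Rotation performed on uninstall (as described above):
rm vos                  # discard the faulty current version
mv vos.BAK vos          # promote the .BAK version to current
mv vos.OLD vos.BAK      # the .OLD version becomes the new .BAK
ls                      # vos and vos.BAK remain; vos.OLD is gone
```

A second uninstall would fail at this point if no new .BAK existed, which is why the text recommends renaming a surviving .OLD version first.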

Just as when you install new binaries, the server processes do not start using a reverted version immediately. Presumably you are reverting because the current binaries do not work, so the following instructions have you restart the relevant processes.

To revert to the previous version of binaries

  1. Verify that you are listed in the /usr/afs/etc/UserList file. If necessary, issue the bos listusers command, which is fully described in To display the users in the UserList file.
       % bos listusers <machine name>
    

  2. Verify that the .BAK version of each relevant binary is available in the /usr/afs/bin directory on each binary distribution machine. If necessary, you can use the bos getdate command as described in Displaying Binary Version Dates. If necessary, rename the .OLD version to .BAK.

  3. Issue the bos uninstall command for a binary distribution machine. (If you have forgotten which machine is performing that role, see To locate the binary distribution machine for a system type.)
       % bos uninstall <machine name> <files to uninstall>+
    

    where

    u
    Is the shortest acceptable abbreviation of uninstall.

    machine name
    Names the binary distribution machine.

    files to uninstall
    Names each binary file in the /usr/afs/bin directory to replace with its .BAK version. The file name alone is sufficient, because the /usr/afs/bin directory is assumed.

  4. Repeat Step 3 for each binary distribution machine.

  5. Wait until the upclientbin process on each server machine retrieves the reverted binaries from the binary distribution machine. You can verify the timestamps on binary files by using the bos getdate command as described in Displaying Binary Version Dates. When the binary files are available on each server machine, issue the bos restart command, for which complete instructions appear in Stopping and Immediately Restarting Processes.

    If you are working on an AFS client machine, it is a wise precaution to have a copy of the bos command suite binaries on the local disk before restarting server processes. In the conventional configuration, the /usr/afsws/bin directory that houses the bos command binary on client machines is a symbolic link into AFS, which conserves local disk space. However, restarting certain processes (particularly the database server processes) can make the AFS filespace inaccessible, particularly if a problem arises during the restart. Having a local copy of the bos binary enables you to uninstall or reinstall process binaries or restart processes even in this case. Use the cp command to copy the bos command binary from the /usr/afsws/bin directory to a local directory such as /tmp.

       % bos restart <machine name> <instances>+
    

Displaying Binary Version Dates

You can check the compilation dates for all three versions of a binary file in the /usr/afs/bin directory: the current, .BAK, and .OLD versions. This is useful for verifying that new binaries have been copied to a file server machine from its binary distribution machine before restarting a server process to use the new binaries.

To check dates on binaries in a directory other than /usr/afs/bin, add the -dir argument. See the AFS Administration Reference.

To display binary version dates

  1. Issue the bos getdate command.
       % bos getdate <machine name> <files to check>+
    

    where

    getd
    Is the shortest acceptable abbreviation of getdate.

    machine name
    Names the file server machine for which to display binary dates.

    files to check
    Names each binary file to display.

Removing Obsolete Binary Files

When processes with new binaries have been running without problems for a number of days, it is generally safe to remove the .BAK and .OLD versions from the /usr/afs/bin directory, both to reduce clutter and to free space on the file server machine's local disk.

You can use the bos prune command's flags to remove the following types of files:

To remove obsolete binaries

  1. Verify that you are listed in the /usr/afs/etc/UserList file. If necessary, issue the bos listusers command, which is fully described in To display the users in the UserList file.
       % bos listusers <machine name>
    

  2. Issue the bos prune command with one or more of its flags.
       % bos prune <machine name> [-bak] [-old] [-core] [-all]
    

    where

    p
    Is the shortest acceptable abbreviation of prune.

    machine name
    Names the file server machine on which to remove obsolete files.

    -bak
    Removes all the files with a .BAK extension from the /usr/afs/bin directory. Do not combine this flag with the -all flag.

    -old
    Removes all the files with a .OLD extension from the /usr/afs/bin directory. Do not combine this flag with the -all flag.

    -core
    Removes all core files from the /usr/afs/logs directory. Do not combine this flag with the -all flag.

    -all
    Combines the effect of the other three flags. Do not combine it with the other three flags.
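As a local illustration, the following sketch reproduces the effect of the flags on simulated directories. The /tmp paths are stand-ins; the real targets are /usr/afs/bin and /usr/afs/logs on the server machine.

```shell
# Simulated server directories with current binaries, obsolete versions,
# and a core file.
mkdir -p /tmp/afs-prune/bin /tmp/afs-prune/logs
touch /tmp/afs-prune/bin/bosserver /tmp/afs-prune/bin/bosserver.BAK \
      /tmp/afs-prune/bin/volserver.OLD /tmp/afs-prune/logs/core.fileserver
rm -f /tmp/afs-prune/bin/*.BAK      # what the -bak flag removes
rm -f /tmp/afs-prune/bin/*.OLD      # what the -old flag removes
rm -f /tmp/afs-prune/logs/core*     # what the -core flag removes
ls /tmp/afs-prune/bin               # -> bosserver
```

Only the current binaries survive, which is exactly the state you want once the new versions have proven stable.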

Displaying A Binary File's Build Level

For the most consistent performance on a server machine, and cell-wide, it is best for all server processes to come from the same AFS distribution. Every AFS binary includes an ASCII string that specifies its version, or build level. To display it, use the strings and grep commands, which are included in most UNIX distributions.

To display an AFS binary's build level

  1. Change to the directory that houses the binary file. If you are not sure where the binary resides, issue the which command.
       % which binary_file
       /bin_dir_path/binary_file
       % cd bin_dir_path
    

  2. Issue the strings command to extract all ASCII strings from the binary file. Pipe the output to the grep command to locate the relevant line.
       % strings ./binary_file | grep Base
    

    The output reports the AFS build level in a format like the following:

       @(#)Base configuration afsversion  build_level
    

    For example, the following string indicates the binary is from AFS 3.6 build 3.0:

       @(#)Base configuration afs3.6 3.0
    

Maintaining the Server CellServDB File

Every file server machine maintains a list of its home cell's database server machines in the file /usr/afs/etc/CellServDB on its local disk. Both database server processes and non-database server processes consult the file:

The consequences of missing or incorrect information in the CellServDB file are as follows:

Note that the /usr/afs/etc/CellServDB file on a server machine is not the same as the /usr/vice/etc/CellServDB file on client machines. The client version includes entries for foreign cells as well as the local cell. However, it is important to update both versions of the file whenever you change your cell's database server machines. A server machine that is also a client needs to have both files, and you need to update them both. For more information on maintaining the client version of the CellServDB file, see Maintaining Knowledge of Database Server Machines.

Distributing the Server CellServDB File

To avoid the negative consequences of incorrect information in the /usr/afs/etc/CellServDB file, you must update it on all of your cell's server machines every time you add or remove a database server machine. The AFS Quick Beginnings provides complete instructions for installing or removing a database server machine and for updating the CellServDB file in that context. This section explains how to distribute the file to your server machines and how to make other cells aware of the changes if you participate in the AFS global name space.

If you use the United States edition of AFS, use the Update Server to distribute the central copy of the server CellServDB file stored on the cell's system control machine. If you use the international edition of AFS, instead change the file on each server machine individually. For further discussion of the system control machine and why international cells must not use it for files in the /usr/afs/etc directory, see The System Control Machine. For instructions on configuring the Update Server when using the United States version of AFS, see the AFS Quick Beginnings.

To avoid formatting errors that can cause problems, always use the bos addhost and bos removehost commands, rather than editing the file directly. You must also restart the database server processes running on the machine, to initiate a coordinator election among the new set of database server machines. This step is included in the instructions that appear in To add a database server machine to the CellServDB file and To remove a database server machine from the CellServDB file. For instructions on displaying the contents of the file, see To display a cell's database server machines.

If you make your cell accessible to foreign users as part of the AFS global name space, you also need to inform other cells when you change your cell's database server machines. The AFS Support group maintains a CellServDB file that lists all cells that participate in the AFS global name space, and can change your cell's entry at your request. For further details, see Making Your Cell Visible to Others.

Another way to advertise your cell's database server machines is to maintain a copy of the file at the conventional location in your AFS filespace, /afs/cell_name/service/etc/CellServDB.local. For further discussion, see The Third Level.

To display a cell's database server machines

  1. Issue the bos listhosts command. If you have maintained the file properly, the output is the same on every server machine, but the machine name argument enables you to check various machines if you wish.
       % bos listhosts <machine name> [<cell name>]
    

    where

    listh
    Is the shortest acceptable abbreviation of listhosts.

    machine name
    Specifies the server machine from which to display the /usr/afs/etc/CellServDB file.

    cell name
    Specifies the complete Internet domain name of a foreign cell. You must already know the name of at least one server machine in the cell, to provide as the machine name argument.

The output lists the machines in the order they appear in the CellServDB file on the specified server machine. It assigns each one a Host index number, as in the following example. There is no implied relationship between the index and a machine's IP address, name, or role as Ubik coordinator or secondary site.

   % bos listhosts fs1.abc.com
   Cell name is abc.com
       Host 1 is fs1.abc.com
       Host 2 is fs7.abc.com
       Host 3 is fs4.abc.com

The output lists machines by name rather than IP address as long as the naming service (such as the Domain Name Service or local host table) is functioning properly. To display IP addresses, log in to a server machine as the local superuser root and use a text editor or display command, such as the cat command, to view the /usr/afs/etc/CellServDB file.

To add a database server machine to the CellServDB file

  1. Verify that you are listed in the /usr/afs/etc/UserList file. If necessary, issue the bos listusers command, which is fully described in To display the users in the UserList file.
       % bos listusers <machine name>
    

  2. Issue the bos addhost command to add each new database server machine to the CellServDB file. If you use the United States edition of AFS, specify the system control machine as machine name. (If you have forgotten which machine is the system control machine, see The Output on the System Control Machine.) If you use the international edition of AFS, repeat the command on each of your cell's server machines in turn by substituting its name for machine name.
       % bos addhost  <machine name>  <host name>+
    

    where

    addh
    Is the shortest acceptable abbreviation of addhost.

    machine name
    Names the system control machine, if you are using the United States edition of AFS. If you are using the international edition of AFS, it names each of your server machines in turn.

    host name
    Specifies the fully qualified hostname of each database server machine to add to the CellServDB file (for example: fs4.abc.com). The BOS Server uses the gethostbyname( ) routine to obtain each machine's IP address and records both the name and address automatically.

  3. Restart the Authentication Server, Backup Server, Protection Server, and VL Server on every database server machine, so that the new set of machines participate in the election of a new Ubik coordinator. The instruction uses the conventional names for the processes; make the appropriate substitution if you use different process names. For complete syntax, see Stopping and Immediately Restarting Processes.

    Important: Repeat the following command in quick succession on all of the database server machines.

       % bos restart <machine name> buserver kaserver ptserver vlserver
    

  4. Edit the /usr/vice/etc/CellServDB file on each of your cell's client machines. For instructions, see Maintaining Knowledge of Database Server Machines.

  5. If you participate in the AFS global name space, please have one of your cell's designated site contacts register the changes you have made with the AFS Product Support group.

    If you maintain a central copy of your cell's server CellServDB file in the conventional location (/afs/cell_name/service/etc/CellServDB.local), edit the file to reflect the change.
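
The sequence above can be sketched as a short script. This is a dry run with hypothetical machine names, written for the international edition (where bos addhost must run against every server machine in turn); it only prints the commands it would issue, so remove the echo to execute them for real:

```shell
#!/bin/sh
# Hypothetical machine names; substitute your cell's servers.
NEW_DB_SERVER="fs4.abc.com"
SERVER_MACHINES="fs1.abc.com fs7.abc.com fs4.abc.com"
DB_SERVER_MACHINES="fs1.abc.com fs7.abc.com fs4.abc.com"

plan_cellservdb_update() {
    # Add the new machine to the CellServDB file on every server machine.
    for m in $SERVER_MACHINES; do
        echo "bos addhost $m $NEW_DB_SERVER"
    done
    # Restart the database processes in quick succession so the new set
    # of machines participates in the Ubik coordinator election.
    for m in $DB_SERVER_MACHINES; do
        echo "bos restart $m buserver kaserver ptserver vlserver"
    done
}
plan_cellservdb_update
```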

To remove a database server machine from the CellServDB file

  1. Verify that you are listed in the /usr/afs/etc/UserList file. If necessary, issue the bos listusers command, which is fully described in To display the users in the UserList file.
       % bos listusers <machine name>
    

  2. Issue the bos removehost command to remove each database server machine from the CellServDB file. If you use the United States edition of AFS, specify the system control machine as machine name. (If you have forgotten which machine is the system control machine, see The Output on the System Control Machine.) If you use the international edition of AFS, repeat the command on each of your cell's server machines in turn by substituting its name for machine name.
       % bos removehost <machine name>  <host name>+
    

    where

    removeh
    Is the shortest acceptable abbreviation of removehost.

    machine name
    Names the system control machine, if you are using the United States edition of AFS. If you are using the international edition of AFS, it names each of your server machines in turn.

    host name
    Specifies the fully qualified hostname of each database server machine to remove from the CellServDB file (for example: fs4.abc.com).

  3. Restart the Authentication Server, Backup Server, Protection Server, and VL Server on every database server machine, so that the new set of machines participate in the election of a new Ubik coordinator. The instruction uses the conventional names for the processes; make the appropriate substitution if you use different process names. For complete syntax, see Stopping and Immediately Restarting Processes.

    Important: Repeat the following command in quick succession on all of the database server machines.

       % bos restart <machine name> buserver kaserver ptserver vlserver
    

  4. Edit the /usr/vice/etc/CellServDB file on each of your cell's client machines. For instructions, see Maintaining Knowledge of Database Server Machines.

  5. If you participate in the AFS global name space, please have one of your cell's designated site contacts register the changes you have made with the AFS Product Support group.

    If you maintain a central copy of your cell's server CellServDB file in the conventional location (/afs/cell_name/service/etc/CellServDB.local), edit the file to reflect the change.


Managing Authentication and Authorization Requirements

This section describes how the AFS server processes guarantee that only properly authorized users perform privileged commands, by performing authorization checking and mutually authenticating with their clients. It explains how you can control authorization checking requirements on a per-machine or per-cell basis, and how to bypass mutual authentication when issuing commands.

Authentication versus Authorization

Many AFS commands are privileged in that the AFS server process invoked by the command performs it only for a properly authorized user. The server process performs the following two tests to determine if someone is properly authorized:

Controlling Authorization Checking on a Server Machine

Disabling authorization checking is a serious breach of security because it means that the AFS server processes on a file server machine perform any action for any user, even the anonymous user.

The only time it is common to disable authorization checking is when installing a new file server machine (see the AFS Quick Beginnings). It is necessary then because it is not possible to configure all of the necessary security mechanisms before performing other actions that normally make use of them. For greatest security, work at the console of the machine you are installing and enable authorization checking as soon as possible.

During normal operation, the only reason to disable authorization checking is if an error occurs with the server encryption keys, leaving the servers unable to authenticate users properly. For instructions on handling key-related emergencies, see Handling Server Encryption Key Emergencies.

You control authorization checking on each file server machine separately; turning it on or off on one machine does not affect the others. Because client machines generally choose a server process at random, it is hard to predict what authorization checking conditions prevail for a given command unless you make the requirement the same on all machines. To turn authorization checking on or off for the entire cell, you must repeat the appropriate command on every file server machine.

The server processes constantly monitor the directory /usr/afs/local on their local disks to determine if they need to check for authorization. If the file called NoAuth appears in that directory, then the servers do not check for authorization. When it is not present (the usual case), they perform authorization checking.

Control the presence of the NoAuth file through the BOS Server. When you disable authorization checking with the bos setauth command (or, during installation, by putting the -noauth flag on the command that starts up the BOS Server), the BOS Server creates the zero-length NoAuth file. When you reenable authorization checking, the BOS Server removes the file.
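
The check the server processes perform can be sketched as follows. This demonstration uses a scratch directory as a stand-in for /usr/afs/local, so it can run anywhere; on a real server machine only the BOS Server manipulates the NoAuth file:

```shell
#!/bin/sh
# Stand-in for /usr/afs/local (demonstration only).
AFS_LOCAL_DIR="${TMPDIR:-/tmp}/afs-local-demo"
mkdir -p "$AFS_LOCAL_DIR"

auth_checking_status() {
    # The servers check only for the file's presence; it is zero-length.
    if [ -f "$AFS_LOCAL_DIR/NoAuth" ]; then
        echo "authorization checking disabled"
    else
        echo "authorization checking enabled"
    fi
}

auth_checking_status                # the usual case: no NoAuth file
touch "$AFS_LOCAL_DIR/NoAuth"       # what 'bos setauth <machine> off' does
auth_checking_status
rm "$AFS_LOCAL_DIR/NoAuth"          # what 'bos setauth <machine> on' does
auth_checking_status
```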

To disable authorization checking on a server machine

  1. Verify that you are listed in the /usr/afs/etc/UserList file. If necessary, issue the bos listusers command, which is fully described in To display the users in the UserList file.
       % bos listusers <machine name>
    

  2. Issue the bos setauth command to disable authorization checking.
       % bos setauth <machine name> off
    

    where

    seta
    Is the shortest acceptable abbreviation of setauth.

    machine name
    Specifies the file server machine on which server processes do not check for authorization.

To enable authorization checking on a server machine

  1. Reenable authorization checking. (No privilege is required because the machine is not currently checking for authorization.) For detailed syntax information, see the preceding section.
       % bos setauth <machine name> on
    

Bypassing Mutual Authentication for an Individual Command

Several of the server processes allow any user (not just system administrators) to disable mutual authentication when issuing a command. The server process treats the issuer as the unauthenticated user anonymous.

The facilities for preventing mutual authentication are provided for use in emergencies (such as the key emergency discussed in Handling Server Encryption Key Emergencies). During normal circumstances, authorization checking is turned on, making it useless to prevent authentication: the server processes refuse to perform privileged commands for the user anonymous.

It can be useful to prevent authentication when authorization checking is turned off. The very act of trying to authenticate can cause problems if the server cannot understand a particular encryption key, as is likely to happen in a key emergency.

To bypass mutual authentication for bos, kas, pts, and vos commands

Provide the -noauth flag which is available on many of the commands in the suites. To verify that a command accepts the flag, issue the help command in its suite, or consult the command's reference page in the AFS Administration Reference (the reference page also specifies the shortest acceptable abbreviation for the flag on each command). The suites' apropos and help commands do not themselves accept the flag.

You can bypass mutual authentication for all kas commands issued during an interactive session by including the -noauth flag on the kas interactive command. If you have already entered interactive mode with an authenticated identity, issue the (kas) noauthentication command to assume the anonymous identity.

To bypass mutual authentication for fs commands

This is not possible, except by issuing the unlog command to discard your tokens before issuing the fs command.


Adding or Removing Disks and Partitions

AFS makes it very easy to add storage space to your cell, just by adding disks to existing file server machines. This section explains how to install or remove a disk used to store AFS volumes. (Another way to add storage space is to install additional server machines, as instructed in the AFS Quick Beginnings.)

Both adding and removing a disk cause at least a brief file system outage, because you must restart the fs process to have it recognize the new set of server partitions. Some operating systems require that you shut the machine off before adding or removing a disk, in which case you must shut down all of the AFS server processes first. Otherwise, the AFS-related aspects of adding or removing a disk are not complicated, so the duration of the outage depends mostly on how long it takes to install or remove the disk itself.

The following instructions for installing a new disk completely prepare it to house AFS volumes. You can then use the vos create command to create new volumes, or the vos move command to move existing ones from other partitions. For instructions, see Creating Read/write Volumes and Moving Volumes. The instructions for removing a disk are basically the reverse of the installation instructions, but include extra steps that protect against data loss.

A server machine can house up to 256 AFS server partitions, each one mounted at a directory with a name of the form /vicepindex, where index is one or two lowercase letters. By convention, the first partition on a machine is mounted at /vicepa, the second at /vicepb, and so on to the twenty-sixth at /vicepz. Additional partitions are mounted at /vicepaa through /vicepaz and so on up to /vicepiv. Using the letters consecutively is not required, but is simpler.
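
The naming convention can be enumerated with a short shell function, which confirms that the sequence /vicepa through /vicepiv yields exactly 256 mount points:

```shell
#!/bin/sh
# Enumerate the conventional AFS server partition mount points:
# /vicepa .. /vicepz, then /vicepaa .. /vicepiv.
vicep_names() {
    letters="a b c d e f g h i j k l m n o p q r s t u v w x y z"
    for x in $letters; do
        echo "/vicep$x"
    done
    for x in $letters; do
        for y in $letters; do
            echo "/vicep$x$y"
            # /vicepiv is the last valid name.
            [ "$x$y" = "iv" ] && return
        done
    done
}
vicep_names | wc -l    # 256
```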

Mount each /vicep directory directly under the local file system's root directory ( / ), not as a subdirectory of any other directory; for example, /usr/vicepa is not an acceptable location. You must also map the directory to the partition's device name in the file server machine's file systems registry file (/etc/fstab or equivalent).
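
For illustration, a Linux-style excerpt from such a file might look like the following; the device names, file system type, and mount options are examples only and vary by operating system:

```
# /etc/fstab excerpt (hypothetical devices)
/dev/sdb1    /vicepa    ext2    defaults    0    2
/dev/sdc1    /vicepb    ext2    defaults    0    2
```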

These instructions assume that the machine's AFS initialization file includes the following command to restart the BOS Server after each reboot. The BOS Server starts the other AFS server processes listed in the local /usr/afs/local/BosConfig file. For information on the bosserver command's optional arguments, see its reference page in the AFS Administration Reference.

   /usr/afs/bin/bosserver &

To add and mount a new disk to house AFS volumes

  1. Become the local superuser root on the machine, if you are not already, by issuing the su command.
       % su root
       Password: root_password
    

  2. Decide how many AFS partitions to divide the new disk into and the names of the directories at which to mount them (the introduction to this section describes the naming conventions). To display the names of the existing server partitions on the machine, issue the vos listpart command. Include the -localauth flag because you are logged in as the local superuser root but do not necessarily have administrative tokens.
       # vos listpart <machine name> -localauth
    

    where

    listp
    Is the shortest acceptable abbreviation of listpart.

    machine name
    Names the local file server machine.

    -localauth
    Constructs a server ticket using a key from the local /usr/afs/etc/KeyFile file. The vos command interpreter presents it to the Volume Server and Volume Location (VL) Server during mutual authentication.

  3. Create each directory at which to mount a partition.
       # mkdir /vicepx[x]
    

  4. Using a text editor, create an entry in the machine's file systems registry file (/etc/fstab or equivalent) for each new disk partition, mapping its device name to the directory you created in the previous step. Refer to existing entries in the file to learn the proper format, which varies for different operating systems.

  5. If the operating system requires that you shut off the machine to install a new disk, issue the bos shutdown command to shut down all AFS server processes other than the BOS Server (it terminates safely when you shut off the machine). Include the -localauth flag because you are logged in as the local superuser root but do not necessarily have administrative tokens. For a complete description of the command, see To stop processes temporarily.
       # bos shutdown <machine name> -localauth [-wait]
    

  6. If necessary, shut off the machine. Install and format the new disk according to the instructions provided by the disk and operating system vendors. If necessary, edit the disk's partition table to reflect the changes you made to the file systems registry file in step 4; consult the operating system documentation for instructions.

  7. If you shut the machine off in step 6, turn it on. Otherwise, issue the bos restart command to restart the fs process, forcing it to recognize the new set of server partitions. Include the -localauth flag because you are logged in as the local superuser root but do not necessarily have administrative tokens. For complete instructions for the bos restart command, see Stopping and Immediately Restarting Processes.
       # bos restart <machine name>  fs -localauth 
    

  8. Issue the bos status command to verify that all server processes are running correctly. For more detailed instructions, see Displaying Process Status and Information from the BosConfig File.
       # bos status <machine name>
    

To unmount and remove a disk housing AFS volumes

  1. Verify that you are listed in the /usr/afs/etc/UserList file. If necessary, issue the bos listusers command, which is fully described in To display the users in the UserList file.
       % bos listusers <machine name>
    

  2. Issue the vos listvol command to list the volumes housed on each partition of each disk you are about to remove, in preparation for removing them or moving them to other partitions. For detailed instructions, see Displaying Volume Headers.
       % vos listvol <machine name> [<partition name>] 
    

  3. Move any volume you wish to retain in the file system to another partition. You can move only read/write volumes. For more detailed instructions, and for instructions on moving read-only and backup volumes, see Moving Volumes.
       % vos move  <volume name or ID>  \
            <machine name on source> <partition name on source>  \
            <machine name on destination> <partition name on destination>
    

  4. (Optional) If there are any volumes you do not wish to retain, back them up using the vos dump command or the AFS Backup System. See Dumping and Restoring Volumes or Backing Up Data, respectively.

  5. Become the local superuser root on the machine, if you are not already, by issuing the su command.
       % su root
       Password: root_password
    

  6. Issue the umount command, repeating it for each partition on the disk to be removed.
       # cd /
       # umount /dev/<partition_block_device_name>
    

  7. Using a text editor, remove or comment out each partition's entry from the machine's file systems registry file (/etc/fstab or equivalent).

  8. Remove the /vicep directory associated with each partition.
       # rmdir /vicepx[x]
    

  9. If the operating system requires that you shut off the machine to remove a disk, issue the bos shutdown command to shut down all AFS server processes other than the BOS Server (it terminates safely when you shut off the machine). Include the -localauth flag because you are logged in as the local superuser root but do not necessarily have administrative tokens. For a complete description of the command, see To stop processes temporarily.
       # bos shutdown <machine name> -localauth [-wait]
    

  10. If necessary, shut off the machine. Remove the disk according to the instructions provided by the disk and operating system vendors. If necessary, edit the disk's partition table to reflect the changes you made to the file systems registry file in step 7; consult the operating system documentation for instructions.

  11. If you shut the machine off in step 10, turn it on. Otherwise, issue the bos restart command to restart the fs process, forcing it to recognize the new set of server partitions. Include the -localauth flag because you are logged in as the local superuser root but do not necessarily have administrative tokens. For complete instructions for the bos restart command, see Stopping and Immediately Restarting Processes.
       # bos restart <machine name>  fs -localauth 
    

  12. Issue the bos status command to verify that all server processes are running correctly. For more detailed instructions, see Displaying Process Status and Information from the BosConfig File.
       # bos status <machine name>
    

Managing Server IP Addresses and VLDB Server Entries

The AFS support for multihomed file server machines is largely automatic. The File Server process records the IP addresses of its file server machine's network interfaces in the local /usr/afs/local/sysid file and also registers them in a server entry in the Volume Location Database (VLDB). The sysid file and server entry are identified by the same unique number, which creates an association between them.

When the Cache Manager requests volume location information, the Volume Location (VL) Server provides all of the interfaces registered for each server machine that houses the volume. This enables the Cache Manager to make use of multiple addresses when accessing AFS data stored on a multihomed file server machine.

If you wish, you can control which interfaces the File Server registers in its VLDB server entry by creating two files in the local /usr/afs/local directory: NetInfo and NetRestrict. Each time the File Server restarts, it builds a list of the local machine's interfaces by reading the NetInfo file, if it exists. If you do not create the file, the File Server uses the list of network interfaces configured with the operating system. It then removes from the list any addresses that appear in the NetRestrict file, if it exists. The File Server records the resulting list in the sysid file and registers the interfaces in the VLDB server entry that has the same unique identifier.
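
The list-building logic can be sketched in shell. This demonstration uses scratch files as stand-ins for /usr/afs/local/NetInfo and NetRestrict, with invented addresses; the real File Server performs the equivalent computation internally at startup:

```shell
#!/bin/sh
# Stand-in for /usr/afs/local (demonstration only).
DEMO_DIR="${TMPDIR:-/tmp}/afs-netinfo-demo"
mkdir -p "$DEMO_DIR"

# Stand-in for the interfaces configured with the operating system.
OS_INTERFACES="192.12.107.33 192.12.105.100 10.0.0.5"

printf '192.12.107.33\n10.0.0.5\n' > "$DEMO_DIR/NetInfo"
printf '10.0.0.5\n'                > "$DEMO_DIR/NetRestrict"

registered_interfaces() {
    # Start from NetInfo if it exists, else the OS interface list...
    if [ -f "$DEMO_DIR/NetInfo" ]; then
        candidates=$(cat "$DEMO_DIR/NetInfo")
    else
        candidates="$OS_INTERFACES"
    fi
    # ...then drop any address listed in NetRestrict.
    for addr in $candidates; do
        if [ -f "$DEMO_DIR/NetRestrict" ] && \
           grep -qx "$addr" "$DEMO_DIR/NetRestrict"; then
            continue
        fi
        echo "$addr"
    done
}
registered_interfaces    # 192.12.107.33
```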

On database server machines, the NetInfo and NetRestrict files also determine which interfaces the Ubik database synchronization library uses when communicating with the database server processes running on other database server machines.

There is a maximum number of IP addresses in each server entry, as documented in the AFS Release Notes. If a multihomed file server machine has more interfaces than the maximum, AFS simply ignores the excess ones. It is probably appropriate for such machines to use the NetInfo and NetRestrict files to control which interfaces are registered.

If for some reason the sysid file no longer exists, the File Server creates a new one with a new unique identifier. When the File Server registers the contents of the new file, the Volume Location (VL) Server normally recognizes automatically that the new file corresponds to an existing server entry, and overwrites the existing server entry with the new file contents and identifier. However, it is best not to remove the sysid file if that can be avoided.

Similarly, it is important not to copy the sysid file from one file server machine to another. If you commonly copy the contents of the /usr/afs directory from an existing machine as part of installing a new file server machine, be sure to remove the sysid file from the /usr/afs/local directory on the new machine before starting the File Server.

There are certain cases where the VL Server cannot determine whether it is appropriate to overwrite an existing server entry with a new sysid file's contents and identifier. It then refuses to allow the File Server to register the interfaces, which prevents the File Server from starting. This can happen if, for example, a new sysid file includes two interfaces that currently are registered by themselves in separate server entries. In such cases, error messages in the /usr/afs/log/VLLog file on the VL Server machine and in the /usr/afs/log/FileLog file on the file server machine indicate that you need to use the vos changeaddr command to resolve the problem. Contact the AFS Product Support group for instructions and assistance.

Except in this type of rare error case, the only appropriate use of the vos changeaddr command is to remove a VLDB server entry completely when you remove a file server machine from service. The VLDB can accommodate a maximum number of server entries, as specified in the AFS Release Notes. Removing obsolete entries makes it possible to allocate server entries for new file server machines as required. See the instructions that follow.

Do not use the vos changeaddr command to change the list of interfaces registered in a VLDB server entry. To change a file server machine's IP addresses and server entry, see the instructions that follow.

To create or edit the server NetInfo file

  1. Become the local superuser root on the machine, if you are not already, by issuing the su command.
       % su root
       Password: root_password
    

  2. Using a text editor, open the /usr/afs/local/NetInfo file. Place one IP address in dotted decimal format (for example, 192.12.107.33) on each line. The order of entries is not significant.

  3. If you want the File Server to start using the revised list immediately, use the bos restart command to restart the fs process. For instructions, see Stopping and Immediately Restarting Processes.

To create or edit the server NetRestrict file

  1. Become the local superuser root on the machine, if you are not already, by issuing the su command.
       % su root
       Password: root_password
    

  2. Using a text editor, open the /usr/afs/local/NetRestrict file. Place one IP address in dotted decimal format on each line. The order of the addresses is not significant. Use the value 255 as a wildcard that represents all possible addresses in that field. For example, the entry 192.12.105.255 indicates that the File Server does not register any of the addresses in the 192.12.105 subnet.

  3. If you want the File Server to start using the revised list immediately, use the bos restart command to restart the fs process. For instructions, see Stopping and Immediately Restarting Processes.
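
The wildcard behavior described in step 2 can be sketched as a small shell function. This is a simplified model for illustration (it treats any 255 field as a wildcard and does not handle every edge case), not the File Server's actual implementation:

```shell
#!/bin/sh
# Return success if the address matches the NetRestrict entry, treating
# 255 in any field of the entry as a wildcard for that field.
matches_restrict_entry() {
    addr="$1"
    entry="$2"
    # Escape literal dots, then turn each 255 field into a digit pattern.
    pattern=$(echo "$entry" | sed 's/\./\\./g; s/255/[0-9]*/g')
    echo "$addr" | grep -qx "$pattern"
}

matches_restrict_entry 192.12.105.33 192.12.105.255 && echo "excluded"
matches_restrict_entry 192.12.107.33 192.12.105.255 || echo "registered"
```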

To display all server entries from the VLDB

  1. Issue the vos listaddrs command to display all server entries from the VLDB.
       % vos listaddrs
    

    where lista is the shortest acceptable abbreviation of listaddrs.

    The output displays all server entries from the VLDB, each on its own line. If a file server machine is multihomed, all of its registered addresses appear on the line. The first one is the one reported as a volume's site in the output from the vos examine and vos listvldb commands.

    VLDB server entries record IP addresses, and the command interpreter has the local name service (either a process like the Domain Name Service or a local host table) translate them to hostnames before displaying them. If an IP address appears in the output, the name service was unable to translate it into a hostname.

    The existence of an entry does not necessarily indicate that the machine is still an active file server machine. To remove obsolete server entries, see the following instructions.

To remove obsolete server entries from the VLDB

  1. Verify that you are listed in the /usr/afs/etc/UserList file. If necessary, issue the bos listusers command, which is fully described in To display the users in the UserList file.
       % bos listusers <machine name>
    

  2. Issue the vos changeaddr command to remove a server entry from the VLDB.
       % vos changeaddr <original IP address> -remove
    

    where

    ch
    Is the shortest acceptable abbreviation of changeaddr.

    original IP address
    Specifies one of the IP addresses currently registered for the file server machine in the VLDB. Any of a multihomed file server machine's addresses are acceptable to identify it.

    -remove
    Removes the server entry.

To change a server machine's IP addresses

  1. Verify that you are listed in the /usr/afs/etc/UserList file. If necessary, issue the bos listusers command, which is fully described in To display the users in the UserList file.
       % bos listusers <machine name>
    

  2. If the machine is the system control machine or a binary distribution machine, and you are also changing its hostname, redefine all relevant upclient processes on other server machines to refer to the new hostname. Use the bos delete and bos create commands as instructed in Creating and Removing Processes.

  3. If the machine is a database server machine, edit its entry in the /usr/afs/etc/CellServDB file on every server machine in the cell to list one of the new IP addresses. If you use the United States edition of AFS, you can edit the file on the system control machine and wait the required time (by default, five minutes) for the Update Server to distribute the changed file to all server machines.

  4. If the machine is a database server machine, issue the bos shutdown command to stop all server processes. If the machine is also a file server, the volumes on it are inaccessible during this time. For a complete description of the command, see To stop processes temporarily.
       % bos shutdown <machine name>
    

  5. Use the utilities provided with the operating system to change one or more of the machine's IP addresses.

  6. If appropriate, edit the /usr/afs/local/NetInfo file, the /usr/afs/local/NetRestrict file, or both, to reflect the changed addresses. Instructions appear earlier in this section.

  7. If the machine is a database server machine, issue the bos restart command to restart all server processes on the machine. For complete instructions for the bos restart command, see Stopping and Immediately Restarting Processes.
       % bos restart <machine name> -all
    

    At the same time, issue the bos restart command on all other database server machines in the cell to restart the database server processes only (the Authentication, Backup, Protection, and Volume Location Servers). Issue the commands in quick succession so that all of the database server processes vote in the quorum election.

       % bos restart <machine name> kaserver buserver ptserver vlserver
    

    If you are changing IP addresses on every database server machine in the cell, you must also issue the bos restart command on every file server machine in the cell to restart the fs process.

  8. If the machine is not a database server machine, issue the bos restart command to restart the fs process (if the machine is a database server, you already restarted the process in the previous step). The File Server automatically compiles a new list of interfaces, records them in the /usr/afs/local/sysid file, and registers them in its VLDB server entry.
       % bos restart <machine name> fs
    

  9. If the machine is a database server machine, edit its entry in the /usr/vice/etc/CellServDB file on every client machine in the cell to list one of the new IP addresses. Instructions appear in Maintaining Knowledge of Database Server Machines.

  10. If there are machine entries in the Protection Database for the machine's previous IP addresses, use the pts rename command to change them to the new addresses. For instructions, see Changing a Protection Database Entry's Name.

Rebooting a Server Machine

You can reboot a server machine either by typing the appropriate commands at its console or by issuing the bos exec command on a remote machine. Remote rebooting can be more convenient, because you do not need to leave your present location, but you cannot track the progress of the reboot as you can at the console. Remote rebooting is possible because the server machine's operating system recognizes the BOS Server, which executes the bos exec command, as the local superuser root.

Rebooting server machines is part of routine maintenance in some cells, and some instructions in the AFS documentation include it as a step. It is certainly not intended to be the standard method for recovering from AFS-related problems, however, but only a last resort when the machine is unresponsive and you have tried all other reasonable options.

Rebooting causes a service outage. If the machine stores volumes, they are all inaccessible until the reboot completes and the File Server reattaches them. If the machine is a database server machine, information from the databases can become unavailable during the reelection of the synchronization site for each database server process; the VL Server outage generally has the greatest impact, because the Cache Manager must be able to access the VLDB to fetch AFS data.

By convention, a server machine's AFS initialization file includes the following command to restart the BOS Server after each reboot. It starts the other AFS server processes listed in the local /usr/afs/local/BosConfig file. These instructions assume that the initialization file includes the command.

   /usr/afs/bin/bosserver &

To reboot a file server machine from its console

  1. Become the local superuser root on the machine, if you are not already, by issuing the su command.
       % su root
       Password: root_password
    

  2. Issue the bos shutdown command to shut down all AFS server processes other than the BOS Server, which terminates safely when you reboot the machine. Include the -localauth flag because you are logged in as the local superuser root but do not necessarily have administrative tokens. For a complete description of the command, see To stop processes temporarily.
       # bos shutdown <machine name> -localauth [-wait]
    

  3. Reboot the machine. On many system types, the appropriate command is shutdown, but the appropriate options vary; consult your UNIX administrator's guide.
        # shutdown
    

To reboot a file server machine remotely

  1. Verify that you are listed in the /usr/afs/etc/UserList file on the machine you are rebooting. If necessary, issue the bos listusers command, which is fully described in To display the users in the UserList file.
       % bos listusers <machine name>
    

  2. Issue the bos shutdown command to halt AFS server processes other than the BOS Server, which terminates safely when you turn off the machine. For a complete description of the command, see To stop processes temporarily.
       % bos shutdown <machine name>  [-wait]
    

  3. Issue the bos exec command to reboot the machine remotely.
       % bos exec <machine name> reboot_command
    

    where

    machine name
    Names the file server machine to reboot.

    reboot_command
    Is the rebooting command for the machine's operating system. The shutdown command is appropriate on many system types, but consult your operating system documentation.

[Return to Library] [Contents] [Previous Topic] [Top of Topic] [Next Topic] [Index]



© IBM Corporation 2000. All Rights Reserved