Workbook 9 Managing Processes

  • Published on 04-Feb-2016

DESCRIPTION

Workbook 9: Managing Processes. Pace Center for Business and Technology. Chapter 1: An Introduction to Processes. Key Concepts: A process is an instance of a running executable, identified by a process id (pid).

Transcript

  • Workbook 9: Managing Processes

    Pace Center for Business and Technology

  • Chapter 1. An Introduction to Processes Key Concepts A process is an instance of a running executable, identified by a process id (pid). Because Linux implements virtual memory, every process possesses its own distinct memory context. A process has a uid and a collection of gids as credentials. A process has a filesystem context, including a cwd, a umask, a root directory, and a collection of open files. A process has a scheduling context, including a niceness value. A process has a collection of environment variables. The ps command can be used to examine all currently running processes. The top command can be used to monitor all running processes.

  • Processes are How Things Get Done Almost anything that happens in a Linux system happens as a process. If you are viewing this text in a web browser, that browser is running as a process. If you are typing at a bash shell's command line, that shell is running as a process. If you are using the chmod command to change a file's permissions, the chmod command operates as a separate process. Processes are how things get done, and the primary responsibility of the Linux kernel is to provide a place for processes to do their stuff without stepping on each other's toes. A process is an instance of an executing program. In other operating systems, programs are often large, elaborate, graphical applications that take a noticeably long time to start up. In the Linux (and Unix) world, these types of programs exist as well, but so does a whole class of programs which usually have no counterpart in other operating systems. These programs are designed to be quick to start, specialized in function, and to play well with others. On a Linux system, processes running these programs are constantly popping into and out of existence.

  • Processes are How Things Get Done For example, consider the user maxwell performing the following command line.

    In the split second that the command line took to execute, no fewer than four processes (ps, grep, bash, and date) were started, did their thing, and exited.
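    The original command line did not survive this transcript, but a pipeline of the same shape, which likewise involves the four processes bash, ps, grep, and date, would be:

```shell
# bash runs date inside the $(...) command substitution, then forks
# ps and grep as the two halves of the pipeline: four processes in all,
# each starting, doing its thing, and exiting within a split second.
ps aux | grep "$(date +%H)"
```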

  • What is a Process?

    By this point, you could well be tired of hearing the answer: a process is an instance of a running program. Here, however, we provide a more detailed list of the components that constitute a process. Execution Context Every process exists (at least to some extent) within the physical memory of the machine. Because Linux (and Unix) is designed to be a multiuser environment, the memory allocated to a process is protected, and no other process can access it. In its memory, a process loads a copy of its executable instructions, and stores any other dynamic information it is managing. A process also carries parameters associated with how often it gets the opportunity to access the CPU, such as its execution state and its niceness value (more on these soon).

  • What is a Process?

    I/O Context Every process interacts to some extent with the filesystem in order to read or write information that exists before or will exist after the lifespan of the process. Elements of a process's input/output context include the following. Open File Descriptors Almost every process is reading information from or writing information to external sources, usually both. In Linux, open file descriptors act as sources or sinks of information. Processes read information from or write information to file descriptors, which may be connected to regular files, device nodes, network sockets, or even each other as pipes (allowing interprocess communication). Memory Mapped Files Memory mapped files are files whose contents have been mapped directly into the process's memory. Rather than reading or writing to a file descriptor, the process just accesses the appropriate memory address. Memory maps are most often used to load a process's executable code, but may also be used for other types of non-sequential access to data.
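    On Linux, a process's open file descriptors and memory maps can be inspected directly through the /proc filesystem (a quick sketch; $$ is the shell's own pid):

```shell
# File descriptors 0, 1, and 2 are stdin, stdout, and stderr,
# usually connected to the terminal; open files and pipes show up here too.
ls -l /proc/$$/fd

# Memory mapped files, including the executable itself and shared
# libraries, are listed in the maps file.
head -4 /proc/$$/maps
```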

  • What is a Process?

    Filesystem Context We have encountered several pieces of information related to the filesystem that processes maintain, such as the process's current working directory (for translating relative file references) and the process's umask (for setting permissions on newly created files). Environment Variables Every process maintains its own list of name-value pairs, referred to as environment variables, or collectively as the process's environment. Processes generally inherit their environment on startup, and may refer to it for information such as the user's preferred language or favorite editor. Heritage Information Every process is identified by a PID, or process id, which it is assigned when it is created. In a later Lesson, we will discover that every process has a clearly defined parent and possibly well defined children. A process's own identity, the identity of its children, and to some extent the identity of its siblings are maintained by the process.
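    Each of these components can be seen from the shell (a sketch using the /proc filesystem):

```shell
echo $$                        # this shell's pid
grep '^PPid:' /proc/$$/status  # heritage: the parent's pid
readlink /proc/$$/cwd          # filesystem context: current working directory
umask                          # filesystem context: the permission mask
env | head -3                  # a few of the process's environment variables
```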

  • What is a Process?

    Credentials Every process runs under the context of a given user (or, more exactly, a given user id), and under the context of a collection of group ids (generally, all of the groups that the user belongs to). These credentials limit which resources a process can access, such as which files it can open or with which other processes it is allowed to communicate. Resource Statistics and Limits Every process also records statistics to track the extent to which system resources have been utilized, such as its memory size, its number of open files, its amount of CPU time, and others. The amount of many of these resources that a process is allowed to use can also be limited, a concept called resource limits.
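    The credentials and limits of the current shell can be examined directly (a sketch):

```shell
id          # the uid and the collection of gids the process runs with
ulimit -n   # one resource limit: the maximum number of open files
grep -E '^(Uid|Gid):' /proc/self/status   # the same credentials, as the kernel records them
```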

  • Viewing Processes with the ps Command

    We have already encountered the ps command many times. Now, we will attempt to familiarize ourselves with a broader selection of the many command line switches associated with it. A quick ps --help will display a summary of over 50 different switches for customizing the ps command's behavior. To complicate matters, different versions of Unix have developed their own versions of the ps command, which do not use the same command line switch conventions. The Linux version of the ps command tries to be as accommodating as possible to people from different Unix backgrounds, and often there are multiple switches for any given option, some of which start with a conventional leading hyphen (-), and some of which do not.

  • Viewing Processes with the ps Command

    Process Selection By default, the ps command lists all processes started from a user's terminal. While this behavior was reasonable when users connected to Unix boxes over serial line terminals, it seems a bit minimalist when every terminal window within an X graphical environment is treated as a separate terminal. The following command line switches can be used to expand (or reduce) the processes which the ps command lists.

  • Output Selection As implied by the initial paragraphs of this Lesson, there are many parameters associated with processes, too many to display in a standard terminal width of 80 columns. The following table lists common command line switches used to select what aspects of a process are listed.

  • Output Selection Additionally, the following switches can be used to modify how the selected information is displayed.

  • Oddities of the ps Command

    The ps command, probably more so than any other command in Linux, has oddities associated with its command line switches. In practice, users tend to experiment until they find combinations that work for them, and then stick to them. For example, the author prefers ps aux for a general purpose listing of all processes, while many people prefer ps -ef. The above tables should provide a reasonable "working set" for the novice. The command line switches tend to fall into two categories, those with the traditional leading hyphen ("Unix98" style options), and those without ("BSD" style options). Often, a given functionality will be represented by one of each. When grouping multiple single letter switches, only switches of the same style can be grouped. For example, ps axf is the same as ps a x f, not ps a x -f.

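    For example, the two styles side by side (the output columns differ, but both list every process on the system):

```shell
ps aux    # BSD style: all processes, "user oriented" output
ps -ef    # Unix98 style: all processes, "full" output
ps axf    # BSD style grouped switches: all processes, drawn as a process tree
```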

  • Monitoring Processes with the top Command The ps command displays statistics for specified processes at the instant that the command is run, providing a snapshot of an instant in time. In contrast, the top command is useful for monitoring the general state of affairs of processes on the machine. The top command is intended to be run from within a terminal. It will replace the command line with a table of currently running processes, which updates every few seconds. The following demonstrates a user's screen after running the top command.


  • Monitoring Processes with the top Command While the command is running, the keyboard is "live". In other words, the top command will respond to single key presses without waiting for the Enter key. The following table lists some of the more commonly used keys.

  • Monitoring Processes with the top Command The last two commands, which either kill or renice a process, use concepts that we will cover in more detail in a later Lesson. Although most often run without command line configuration, top does support the following command line switches.
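    Two of the more useful switches can be sketched as follows (-b, -n, and -d are standard top switches; the exact output varies by version):

```shell
top -b -n 1 | head -12   # batch mode, one iteration: a one-shot snapshot of all processes
# top -d 10              # interactive: update every 10 seconds instead of the default
```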

  • Monitoring Processes with the gnome-system-monitor Application If running an X server, the GNOME desktop environment provides an application similar in function to top, with the benefits (and drawbacks) of a graphical application. The application can be started from the command line as gnome-system-monitor, or by selecting the System : Administration : System Monitor menu item.

  • Monitoring Processes with the gnome-system-monitor Application

    Like the top command, the System Monitor displays a list of processes running on the local machine, refreshing the list every few seconds. In its default configuration, the System Monitor provides a much simpler interface: it lists only the processes owned by the user who started the application, and reduces the number of columns to just the process's command, owner, process ID, and simple measures of the process's memory and CPU utilization. Processes may be sorted by any one of these fields by simply clicking on the column's title.

  • Monitoring Processes with the gnome-system-monitor Application When right-clicking on a process, a pop-up menu allows the user to perform many of the actions that top allowed, such as renicing or killing a process, though again with a simpler (and less flexible) interface.

  • Monitoring Processes with the gnome-system-monitor Application The System Monitor may be configured by opening the Edit : Preferences menu selection. Within the Preferences dialog, the user may set the update interval (in seconds), and configure many more fields to be displayed.

  • Locating processes with the pgrep Command.

    Often, users are trying to locate information about processes identified by the command they are running, or the user who is running them. One technique is to list all processes, and use the grep command to reduce the information. In the following, maxwell first looks for all instances of the sshd daemon, and then for all processes owned by the user maxwell.

    While maxwell can find the information he needs, there are some unpleasant issues. The approach is not precise. Notice that, in the second search, a su process showed up, not because it was owned by maxwell, but because the word maxwell was one of its arguments. Similarly, the grep command itself usually shows up in the output. The compound command can also be awkward to type.
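    The transcript of maxwell's searches was lost; they would have looked something like the following. The bracket trick on the last line is a common workaround for the grep self-match problem:

```shell
ps aux | grep sshd       # find sshd processes; the grep itself usually shows up too
ps aux | grep maxwell    # find maxwell's processes; also matches 'maxwell' in arguments
ps aux | grep '[s]shd'   # the pattern no longer matches grep's own command line
```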

  • Locating processes with the pgrep Command.

    In order to address these issues, the pgrep command was created. Named pgrep for obvious reasons, the command allows users to quickly list processes by command name, user, terminal, or group. pgrep [SWITCHES] [PATTERN] Its optional argument, if supplied, is interpreted as an extended regular expression pattern to be matched against command names. The following command line switches may also be used to qualify the search.

  • Locating processes with the pgrep Command.

    In addition, the following command line switches can be used to qualify the output formatting of the command.

    For a complete list of switches, consult the pgrep(1) man page. As a quick example, maxwell will repeat his two previous process listings, using the pgrep command.

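    The lost transcript would have resembled the following (maxwell's two searches, redone with pgrep):

```shell
pgrep -l sshd              # pids and names of processes whose command matches sshd
pgrep -u maxwell           # pids of every process owned by the user maxwell
pgrep -u maxwell -l bash   # maxwell's bash processes, with names
```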

  • Examples Chapter 1. An Introduction to Processes Viewing All Processes with the "User Oriented" Format In the following transcript, maxwell uses the ps -e u command to list all processes (-e) with the "user oriented" format (u).

    The "user oriented" view displays the user who is running the process, the process id, and a rough estimate of the amount of CPU and memory the process is consuming, as well as the state of the process. (Process states will be discussed in the next Lesson.)

  • Questions Chapter 1. An Introduction to Processes: 1, 2, and 3

  • Chapter 2. Process States Key Concepts In Linux, the first process, /sbin/init, is started by the kernel on bootup. All other processes are the result of a parent process duplicating itself, or forking. A process begins executing a new command through a process called execing. Often, new commands are run by a process (often a shell) first forking, and then execing. This mechanism is referred to as the fork and exec mechanism. Processes can always be found in one of five well defined states: runnable, voluntarily sleeping, involuntarily sleeping, stopped, or zombie. Process ancestry can be viewed with the pstree command. When a process dies, it is the responsibility of the process's parent to collect its return code and resource usage information. When a parent dies before its children, the orphaned children are inherited by the first process (usually /sbin/init).

  • A Process's Life Cycle How Processes are Started In Linux (and Unix), unlike many other operating systems, process creation and command execution are two separate concepts. Though usually a new process is created so that it can run a specified command (such as the bash shell creating a process to run the chmod command), processes can be created without running a new command, and new commands can be executed without creating a new process. Creating a New Process (Forking) New processes are created through a technique called forking. When a process forks, it creates a duplicate of itself. Immediately after a fork, the newly created process (the child) is an almost exact duplicate of the original process (the parent). The child inherits an identical copy of the original process's memory, any open files of the parent, and identical copies of any parameters of the parent, such as the current working directory or umask. About the only differences between the parent and the child are the child's heritage information (the child has a different process ID and a different parent process ID, for starters), and (for the programmers in the audience) the return value of the fork() system call.

  • A Process's Life Cycle As a quick aside for any programmers in the audience, a fork is usually implemented using a structure similar to the following.

    When a process wants to create a new process, it calls the fork() system call (with no arguments). Though only one process enters the fork() call, two processes return from it. For the newly created process (the child), the return value is 0. For the original process (the parent), the return value is the process ID of the child. By branching on this value, the child may now go off to do whatever it was started to do (which often involves exec()ing, see next), and the parent can go on to do its own thing.
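    The C snippet this slide referred to did not survive the transcript. The same fork-then-branch structure can be sketched with the shell's own primitives: an ampersand forks a child, $! holds the pid that fork() returned to the parent, and wait collects the child.

```shell
# '&' forks: the child runs sleep, while the parent shell continues immediately.
sleep 1 &
child=$!                      # the child's pid, as returned to the parent by fork()
echo "parent $$ forked child $child"
wait "$child"                 # the parent collects the child's exit status
echo "child exited with status $?"
```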

  • A Process's Life Cycle Executing a New Command (Exec-ing) New commands are run through a technique called execing (short for executing). When execing a new command, the current process wipes and releases most of its resources, and loads a new set of instructions from the command specified in the filesystem. Execution starts with the entry point of the new program. After execing, the new command is still the same process. It has the same process ID, and many of the same parameters (such as its resource utilization, umask, current working directory, and others). It merely forgets its former command, and adopts the new one. Again for any programmers, execs are performed through one of several variants of the execve() system call, such as the execl() library call.

    The process enters the execl(...) call, specifying the new command to run. If all goes well, the execl(...) call never returns. Instead, execution picks up at the entry point (i.e., main()) of the new program. If for some reason execl(...) does return, it must be an error (such as not being able to locate the command's executable in the filesystem).
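    The shell builtin exec behaves the same way, and gives a quick way to watch a process forget its old command (a sketch):

```shell
# The inner sh prints its pid, then execs date: same process, same pid,
# new program. The final echo is never reached, because a successful
# exec does not return.
sh -c 'echo "pid before exec: $$"; exec date; echo "never printed"'
```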

  • A Process's Life Cycle Combining the Two: Fork and Exec Some programs may fork without execing. Examples include networking daemons, which fork a new child to handle a specific client connection, while the parent goes back to listen for new clients. Other programs might exec without forking. Examples include the login command, which becomes the user's login shell after successfully confirming a user's password. Most often, however, and for shells in particular, forking and execing go hand in hand. When running a command, the bash shell first forks a new bash shell. The child then execs the appropriate command, while the parent waits for the child to die, and then issues another prompt.

  • The Lineage of Processes (and the pstree Command) Upon booting the system, one of the responsibilities of the Linux kernel is to start the first process (usually /sbin/init). All other processes are started because an already existing process forked. Because every process except the first is created by forking, there exists a well defined lineage of parent child relationships among the processes. The first process started by the kernel starts off the family tree, which can be examined with the pstree command.
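    For example (a sketch; the tree differs on every machine, and pstree may need the psmisc package installed):

```shell
pstree -p | head -5    # -p includes each process's pid; the root is the first process
```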

  • How a Process Dies When a process dies, it either dies normally by electing to exit, or abnormally as the result of receiving a signal. Here we discuss a normally exiting process, postponing a discussion of signals until a later Lesson. We have mentioned previously that processes leave behind a status code (also called a return value) when they die, in the form of an integer. (Recall the bash shell, which uses the $? variable to store the return value of the previously run command.) When a process exits, all of its resources are freed, except the return code (and some resource utilization accounting information). It is the responsibility of the process's parent to collect this information, and free up the last remaining resources of the dead child. For example, when the bash shell forks and execs the chmod command, it is the parent bash shell's responsibility to collect the return value from the exited chmod command. Orphans If it is a parent's responsibility to clean up after its children, what happens if the parent dies before the child does? The child becomes an orphan. One of the special responsibilities of the first process started by the kernel is to "adopt" any orphans. (Notice that in the output of the pstree command, the first process has a disproportionately large number of children. Most of these were adopted as the orphans of other processes.)


  • How a Process Dies Zombies In between the time when a process exits, freeing most of its resources, and the time when its parent collects its return value, freeing the rest of its resources, the child process is in a special state referred to as a zombie. Every process passes through a transient zombie state. Usually, users need to be looking at just the right time (with the ps command, for example) to witness a zombie. Zombies show up in the list of processes, but take up no memory, no CPU time, nor any other system resources. They are just the shadow of a former process, waiting for their parent to come and finish them off. Negligent Parents and Long Lived Zombies Occasionally, parent processes can be negligent. They start child processes, but then never go back to clean up after them. When this happens (usually because of a programmer's error), the child can exit, enter the zombie state, and stay there. This is usually the case when users witness zombie processes using, for example, the ps command. Getting rid of zombies is perhaps the most misunderstood basic Linux (and Unix) concept. Many people will say that there is no way to get rid of them, except by rebooting the machine. Using the clues discussed in this section, can you figure out how to get rid of long lived zombies? You get rid of zombies by getting rid of the negligent parent. When the parent dies (or is killed), the now orphaned zombie gets adopted by the first process, which is almost always /sbin/init. /sbin/init is a very diligent parent, who always cleans up after its children (including adopted orphans).
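    The clues above can be turned into a small demonstration (a sketch; /tmp/zombie.pid is just a scratch file). The inner sh forks a short-lived child and then execs into sleep, which never calls wait(), so the child lingers as a zombie until its negligent parent dies:

```shell
sh -c 'sleep 0.2 & echo $! > /tmp/zombie.pid; exec sleep 2' &
sleep 1                                # by now the child has exited, unreaped
zpid=$(cat /tmp/zombie.pid)
awk '{print $3}' "/proc/$zpid/stat"    # field 3 is the state letter: prints Z
rm -f /tmp/zombie.pid
```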

  • The 5 Process States The previous section discussed how processes are started, and how they die. While processes are alive, they are always in one of five process states, which affect how and when they are allowed to have access to the CPU. The following lists each of the five states, along with the conventional letter that is used by the ps, top, and other commands to identify a process's current state. Runnable (R) Processes in the Runnable state are processes that, if given the opportunity to access the CPU, would take it. More formally, this is known as the Running state, but because only one process may be executing on the CPU at any given time, only one of these processes will actually be "running" at any given instant. Because runnable processes are switched in and out of the CPU so quickly, however, the Linux system gives the appearance that all of the processes are running simultaneously.

  • The 5 Process States Voluntary (Interruptible) Sleep (S) As the name implies, a process which is in a voluntary sleep elected to be there. Usually, this is a process that has nothing to do until something interesting happens. A classic example is a networking daemon, such as the httpd process that implements a web server. In between requests from a client (web browser), the server has nothing to do, and elects to go to sleep. Another example would be the top command, which lists processes every five seconds. While it is waiting for five seconds to pass, it drops itself into a voluntary sleep. When something that the process is interested in happens (such as a web client making a request, or a five second timer expiring), the sleeping process is kicked back into the Runnable state. Involuntary (Non-interruptible) Sleep (D) Occasionally, two processes try to access the same system resource at the same time. For example, one process attempts to read from a block on a disk while that block is being written to by another process. In these situations, the kernel forces the process into an involuntary sleep. The process did not elect to sleep; it would prefer to be runnable so it can get things done. When the resource is freed, the kernel will put the process back into the runnable state. Although processes are constantly dropping into and out of involuntary sleeps, they usually do not stay there long. As a result, users do not usually witness processes in an involuntary sleep except on busy systems.

  • The 5 Process States Stopped (Suspended) Processes (T) Occasionally, users decide to suspend processes. Suspended processes will not perform any actions until they are restarted by the user. In the bash shell, the CTRL+Z key sequence can be used to suspend a process. In programming, debuggers often suspend the programs they are debugging when certain events happen (such as hitting breakpoints). Zombie Processes (Z) As mentioned above, every dying process goes through a transient zombie state. Occasionally, however, some get stuck there. Zombie processes have finished executing, and have freed all of their memory and almost all of their resources. Because they are not consuming any resources, they are little more than an annoyance that can show up in process listings.

  • Viewing Process States When viewing the output of commands such as ps and top, process states are usually listed under the heading STAT. The process is identified by one of the following letters. Runnable - R Sleeping - S Stopped - T Uninterruptible sleep - D Zombie - Z

  • Examples Chapter 2. Process States: Identifying Process States

  • Questions Chapter 2. Process States: 1, 2, and 4

  • Chapter 3. Process Scheduling: nice and renice Key Concepts A primary task of the Linux kernel is scheduling processes. Every process has a niceness value that influences its scheduling. The nice and renice commands can change a process's scheduling priority.

  • Process Scheduling Nomenclature One of the fundamental tasks of the Linux kernel is to ensure that processes share system resources effectively. One of the most fundamental resources which has to be shared is the CPU. How the kernel decides which process gets to execute on the CPU at which time is known as scheduling. Every process has two values which influence its scheduling. The first is a dynamic value which is constantly being changed by the kernel. The second is a fixed value, which is only occasionally (if ever) explicitly changed by a user. In the open source community, the nomenclature used to describe these two values has been inconsistent (at best), which leads to confusion. As much as possible, this text will try to be consistent with the ps and top commands, and refer to the first (dynamic) value as the process's priority, and the second (fixed) value as the process's niceness.

  • Process Scheduling, in Essence Recently, much attention has been focused on the methods used by the Linux kernel to implement scheduling, and the technique has varied from kernel release to kernel release. While the following discussion is not correct at a detailed level, it nevertheless conveys the essence of how the Linux kernel schedules processes. In order to more easily illustrate scheduling, maxwell will start four instances of the cat command, running in the background. (Processes can be run in the background by appending an ampersand (&), as will be discussed in a later Lesson.) The cat commands read from /dev/zero (a pseudo device that acts as an infinite source of binary zeros), and write to /dev/null (a pseudo device which throws away everything that is written to it).
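    The transcript of maxwell's commands was lost; from the description above, they would have been four repetitions of the same line:

```shell
# Each cat copies an endless stream of zeros into the bit bucket,
# keeping all four processes permanently runnable.
cat /dev/zero > /dev/null &
cat /dev/zero > /dev/null &
cat /dev/zero > /dev/null &
cat /dev/zero > /dev/null &
```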

  • Process Scheduling, in Essence How long will these cat commands run? Forever. The user maxwell next monitors the processes on his machine using the top command.

    While watching the top command, maxwell observes that the values in the third column (labeled PRI) are constantly changing. These are the process's dynamic "priority" values mentioned above. The fourth column (labeled NI) is the fixed "niceness" value of the process.

  • Process Priorities When scheduling processes, the kernel effectively gives every process a handful of counters. Every time a process gets scheduled onto the CPU, it gives up one of its counters. When deciding which process to schedule onto the CPU next, the kernel chooses the runnable process with the most counters. Eventually, the kernel will reach a state where all of the runnable processes have used up their counters. This is referred to as the end of a scheduling epoch, and at this point, the kernel starts all of the processes over again with a new handful of counters. Notice that processes which are not in the runnable state never give up their counters. If, however, a sleeping process were to suddenly awaken (because something interesting happened) and be kicked into the runnable state, it would most likely have more counters than processes which had been running for a while, and would be quickly scheduled onto the CPU. How does this relate to the values shown in the PRI column? Think of this column as a process's number of counters, subtracted from 40. Therefore, processes with a lower priority (as listed by the top command) have the scheduling advantage. In the output above, the cat commands, which are constantly in the runnable state, are consuming their counters. The init process, however, which is sleeping quietly in the background, is not.

  • Process Niceness As mentioned above, every process also has a static value referred to as its niceness value. This value may range from -20 to 19 for any process, starting at 0 by default. How does a process's niceness influence its scheduling? At the beginning of a scheduling epoch, you can think of the kernel subtracting a process's niceness value from the number of counters the process is allocated. As a result, "nicer" processes (those with a higher niceness value) get fewer counters, and thus less time on the CPU, while "greedy" processes (those with a niceness value less than 0) get more counters and more time on the CPU. If a "nice" process is the only one running on the machine, however, it would get full access to the CPU. Changing a Process's Niceness Suppose maxwell were about to run a physics simulation that would take several days to complete. By increasing the process's niceness, the process would patiently wait if anyone else were running processes on the machine. If no one else were running processes, however, the physics simulation would have full access to the CPU. There are several techniques by which maxwell could alter his process's niceness value.

  • Using nice to Start a low Priority Command The nice command is used to set a process's niceness as the process is started. When maxwell starts his simulation (which is an executable in his home directory named ~/simulation), he makes it as nice as possible, with a value of +19. (He also places the process in the background. Again, don't worry about that now. It will be discussed in a later Lesson.)

    Notice that the syntax can be misleading. The token -19 should not be considered negative 19, but instead the numeric command line switch 19. The user maxwell then again monitors processes using the top command. The first few processes are listed below.

  • Using nice to Start a low Priority Command

    Next, maxwell gets rid of the cat commands (using techniques we will learn next Lesson).

  • Using nice to Start a low Priority Command When he observes the top command again, his simulation, now (almost) the lone runnable process on the machine, is receiving almost all of the CPU's time.

    As an additional subtlety, the number specified is the number to be added to the current shell's niceness value. Since most shells run with a niceness of 0, this is seldom noticed. But if a shell were running with a niceness value of 10, the following command line would result in the simulation running with a niceness value of 15.

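    Reconstructed (the original transcript is lost), maxwell's command would have looked like one of the first two lines below; the last line shows one way to confirm the result through /proc:

```shell
nice -19 ~/simulation &           # '-19' means niceness 19, not negative 19
nice -n 19 ~/simulation &         # the same, using the unambiguous -n form
awk '{print $19}' /proc/$!/stat   # field 19 of /proc/PID/stat is the niceness
```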

  • Using renice to Alter a Running Process

    The renice command can be used to change the niceness of an already running process. Processes can be specified by process id, username, or group name, depending on which of the following command line switches are used.

  • Using renice to Alter a Running Process

    Suppose maxwell had already started his simulation, without altering its niceness value.

    He decides to be more polite to other people who might be using the machine, and uses the renice command to bump up the process's niceness value. In the absence of any command line switches, the renice command expects a niceness value and a process ID as its two arguments.
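The book's session output is not shown in this transcript; a sketch of the same idea, with a sleep standing in for maxwell's simulation:

```shell
# Stand-in for an already-running simulation:
sleep 60 &
pid=$!

# renice NICENESS PID -- raise the running process to niceness 19:
renice 19 "$pid"

# Confirm the change in the NI column:
ps -o pid,ni,comm -p "$pid"

kill "$pid"    # clean up the stand-in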

  • Using top to Renice a Process As mentioned in the previous unit, the top command uses the r key to renice a process. While monitoring processes with top, pressing r will open the following dialog above the list of processes.

    Making Processes Greedier What if maxwell were more malicious in intent, and wanted to make his simulation greedier instead of nicer? Fortunately for other users on the machine, normal users cannot lower the niceness of a process. This has two implications. Because processes start with a default niceness of 0, standard users cannot make "greedy" processes with negative niceness values. Once a process has been made nice, it cannot be made "normal" again by normal users. Suppose the administrator noticed that maxwell's simulation was taking up excessive amounts of CPU time. She could use the renice command as root to bump up maxwell's niceness, and maxwell could not restore it.


  • Lab Exercise - Process Scheduling: nice and renice Run the following command in a terminal.

    In another terminal, use the renice command to change the niceness value of all processes owned by you to 5. (You might want to consider using the pgrep command in conjunction with the xargs command for this step.) After completing the last step, change the niceness value of the cat process (started in step 1) to 10. Use the nice command to start another cat command (again reading /dev/zero redirected to /dev/null) with a niceness value of 15. Grade your exercise with both instances of the cat command still running.
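The pgrep/xargs combination hinted at above might be sketched as follows; the cat process and niceness values mirror the exercise, and restricting pgrep with -x cat keeps the renice from touching unrelated processes:

```shell
cat /dev/zero > /dev/null &
catpid=$!

# pgrep selects the PIDs, xargs hands them to renice:
pgrep -u "$(id -un)" -x cat | xargs -r renice 10

ps -o pid,ni,comm -p "$catpid"   # the NI column should now read 10
kill "$catpid"                   # clean up
```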


  • Lab Exercise - Process Scheduling: nice and renice Deliverables A cat command running with a niceness value of 10. A cat command running with a niceness value of 15. All other processes run by you have a niceness value of 5.

    Cleaning Up When you are finished grading your exercise, you may stop all of your cat processes with the CTRL+C control sequence.


  • Chapter3. Process Scheduling: nice and renice Questions

    1, 2 and 3

  • Chapter4. Sending Signals Key Concepts Signals are a low level form of inter-process communication, which arise from a variety of sources, including the kernel, the terminal, and other processes. Signals are distinguished by signal numbers, which have conventional symbolic names and uses. The symbolic names for signal numbers can be listed with the kill -l command. The kill command sends signals to other processes. Upon receiving a signal, a process may either ignore it, react in a kernel specified default manner, or implement a custom signal handler. Conventionally, signal number 15 (SIGTERM) is used to request the termination of a process. Signal number 9 (SIGKILL) terminates a process, and cannot be overridden. The pkill and killall commands can be used to deliver signals to processes specified by command name, or the user who owns them. Other utilities, such as top and the GNOME System Monitor can be used to deliver signals as well.


  • Signals Linux (and Unix) uses signals to notify processes of abnormal events, and as a primitive mechanism of interprocess communication. Signals are sometimes referred to as software interrupts, in that they can interrupt the normal flow of execution of a process. The kernel uses signals to notify processes of abnormal behavior, such as if the process tries to divide a number by zero, or tries to access memory that does not belong to it. Processes can also send signals to other processes. For example, a bash shell could send a signal to an xclock process. The receiving process knows very little about the origins of the signal. It doesn't know if the signal originated from the kernel, or from another process; all it knows is that it received a signal.

  • Signals There are, however, different flavors of signals. The different flavors have symbolic names, but are also identified by integers. The various integers, and the symbolic name they are mapped to, can be listed using the kill -l command, or examined in the signal(7) man page.

    Linux, like most versions of Unix, implements 32 "normal" signals. In Linux, signals numbered 32 through 63 (which are not standard among the various versions of Unix) are "real time" signals, and beyond the scope of this text.
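The kill -l output itself is missing from this transcript; the command can be sketched as:

```shell
kill -l       # list every signal name alongside its number
kill -l 15    # translate a single number to its name (prints TERM)
```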

  • Why Are Signals Sent? There are a variety of reasons why signals might be sent to a process, as illustrated by the following examples. Hardware Exceptions The process asked the hardware to perform some erroneous operation. For example, the kernel will send a process a SIGFPE (signal number 8) if it performs a divide by 0. Software Conditions Processes may need to be notified of some abnormal software condition. For example, whenever a process dies, the kernel sends a SIGCHLD (signal number 17) to the process's parent. As another example, X graphical applications receive a SIGWINCH (signal number 28) whenever their window is resized, so that they can respond to the new geometry. Terminal Interrupts Various terminal control key sequences send signals to the bash shell's foreground process. For example, CTRL+C sends a SIGINT (signal number 2), while CTRL+Z sends a SIGTSTP (signal number 20). Other Processes Processes may elect to send any signal to any other process owned by the same user. The kill command is designed to do just this.

  • Sending Signals: the kill Command The kill command is used to deliver custom signals to other processes. It expects to be called with a numeric or symbolic command line switch, which specifies which signal to send, and a process ID, which specifies which process should receive it. As an example, the following commands deliver a SIGCHLD (signal number 17) to the xclock process, process ID number 8060.

    When using the symbolic name to specify a signal, the SIG prefix (which all signals share) can either be included or omitted.
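The xclock session (process ID 8060) described above is not reproduced here; a self-contained demonstration with a process we own shows the same equivalent forms and the conventional exit status:

```shell
# Equivalent ways of naming a signal: kill -17 PID, kill -SIGCHLD PID, kill -CHLD PID.
# Here we deliver the default SIGTERM to a disposable sleep instead:
sleep 30 &
pid=$!
kill -TERM "$pid"        # same effect as: kill -15 "$pid" or simply kill "$pid"
wait "$pid"
echo "exit status: $?"   # 128 + 15 = 143 marks death by SIGTERM
```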


  • Receiving Signals When a process receives a signal, it may take one of the following three actions. Implement a Kernel Default Signal Handler For each type of signal, there is a default response which is implemented by the kernel. Each signal is mapped to one of the following behaviors. Terminate: The receiving process is killed. Ignore: The receiving process ignores the signal. Core: The receiving process terminates, but first dumps an image of its memory into a file named core in the process's current working directory. The core file can be used by developers to help debug the program. This response is affectionately referred to as "puking" by many in the Unix community. Stop: Stop (suspend) the process. The signal(7) man page documents which behavior is mapped to which signal. Choose to Ignore the Signal Programmers may elect for their application to simply ignore specified signals. Choose to Implement a Custom Signal Handler Programmers may elect to implement their own behavior when a specified signal is received. The response of the program is completely determined by the programmer. Unless a program's documentation says otherwise, you can usually assume that a process will respond with the kernel implemented default behavior. Any other response should be documented.

  • Using Signals to Terminate Processes Of the 32 signals used in Linux (and Unix), standard users in practice only (explicitly) make use of a few.

    Usually, standard users are using signals to terminate a process (thus the name of the kill command). By convention, if programmers want to implement custom behavior when shutting down (such as flushing important memory buffers to disk, etc.), they implement a custom signal handler for signal number 15 to perform the action. Signal number 9 is handled specially by the kernel, and cannot be overridden by a custom signal handler or ignored. It is reserved as a last-resort, kernel-level technique for killing a process.
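A small illustration (not from the original text) of why SIGTERM is the polite choice: a custom handler can intercept it, while SIGKILL cannot be caught:

```shell
# A child process that installs a custom SIGTERM handler:
bash -c 'trap "echo caught SIGTERM, cleaning up; exit 0" TERM
         while :; do sleep 1; done' &
pid=$!

sleep 1          # give the child a moment to install its trap
kill "$pid"      # SIGTERM: the custom handler runs before exiting
wait "$pid"

# kill -9 "$pid" would bypass any handler: SIGKILL is enforced by the kernel.
```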

  • Using Signals to Terminate Processes As an example, einstein will start a cat command that would in principle run forever. He then tracks down the process ID of the command, and terminates it with a SIGTERM.

    SIGTERM (signal number 15) is the default signal for the kill command, so einstein could have used kill 8375 to the same effect. In the following, einstein repeats the sequence, this time sending a SIGKILL.

  • Alternatives to the kill Command Using signals to control processes is such a common occurrence that alternatives to the kill command abound. The following sections mention a few. The pkill Command In each of the previous examples, einstein needs to determine the process ID of a process before sending a signal to it with the kill command. The pkill command can be used to send signals to processes selected by more general means. The pkill command expects the following syntax. pkill [-signal] [SWITCHES] [PATTERN] The first token optionally specifies the signal number to send (by default, signal number 15). PATTERN is an extended regular expression that will be matched against command names. The following table lists commonly used command line switches. Processes that meet all of the specified criteria will be sent the specified signal.


  • Alternatives to the kill Command Conveniently, the pkill command omits itself and the shell which started it when killing all processes owned by a particular user or terminal. Consider the following example.

    Notice that, although the bash shell qualifies as a process owned by the user maxwell, it survived the slaughter.
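maxwell's session isn't reproduced in this transcript; the same effect can be sketched with a disposable process killed by name rather than by PID:

```shell
sleep 300 &                      # a disposable long-running process
pkill -u "$(id -un)" -x sleep    # -x: match the command name "sleep" exactly
wait                             # reap the terminated job
```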

  • The killall Command Similar to pkill, the killall command delivers signals to processes specified by command name. The killall command supports the following command line switches.

  • The System Monitor The System Monitor GNOME application, introduced in a previous Lesson, can also be used to deliver signals to processes. By right clicking on a process, a pop-up menu allows the user to select End Process, which has the effect of delivering a SIGTERM to the process. What do you think the Kill Process menu selection does? The Kill Process menu selection delivers a SIGKILL signal to the process.

  • The top Command The top command can also be used to deliver signals to processes. Using the k key, the following dialog occurs above the list of processes, allowing the user to specify which process ID should receive the signal, and which signal to deliver.

  • Online Exercises Chapter4. Sending Signals Lab Exercise Objective: Effectively terminate running processes. Estimated Time: 10 mins. Specification Create a short shell script called ~/bin/kill_all_cats, and make it executable. When executed, the script should kill all currently running cat processes. In a terminal, start a cat process using the following command line. Leave the process running while grading your exercise (but don't be surprised if it's not running when you're done). [student@station student]$ cat /dev/zero > /dev/null Deliverables A shell script called ~/bin/kill_all_cats, which, when executed, delivers a SIGTERM signal to all currently running instances of the cat command. An executing cat process.

    Hint: the ~/bin/kill_all_cats script can simply contain the command killall cat.

  • Chapter5. Job Control Key Concepts The bash shell allows commands to be run in the background as "jobs". The bash shell allows one job to run in the foreground, and can have multiple backgrounded jobs. The jobs command will list all backgrounded jobs. The CTRL+Z key sequence will suspend and background the current foreground job. The bg command resumes a stopped job in the background. The fg command brings a backgrounded job to the foreground.

    Discussion The topics addressed by this Workbook so far, namely listing process information, changing a process's niceness, and sending signals to processes, are features shared by all processes, whether they are started from a shell's command line or otherwise. Unlike these previous topics, our remaining topic, job control, concerns itself with managing processes which are started from an interactive shell prompt, and we will focus on the bash shell in particular.


  • Running Commands in the Foreground When running a command from the bash shell prompt, unless you specify otherwise, the command runs in the foreground. The bash shell waits for the foreground command to terminate before issuing another prompt, and anything typed at the keyboard is generally read as stdin to this command. All of this should sound familiar, as almost every command used thus far has been run in the foreground. Running Commands in the Background as Jobs In contrast, any command you specify can also be run in the background by appending the ampersand character (&) to the command line. Generally, only long running commands that do not require input from the keyboard, and do not generate large amounts of output, are appropriate for backgrounding. When the bash shell backgrounds a command, the command is referred to as a job, and assigned a job number.


  • Running Commands in the Background as Jobs In the following example, einstein is performing a search of his entire filesystem for files which are larger than 1 megabyte in size. Because he expects this command to run a while, he redirects stdout to a file, throws stderr away, and runs it as a background job.

    After starting the job in the background, the bash shell reports two pieces of information back to einstein. The first is the job number, reported in square brackets. The second is the process ID of the backgrounded job. In this case, the job is job number 1, and the process ID of the find command is 7022. While this command is running in the background, einstein decides he would also like to find all files owned by him which he has not modified in two weeks. He composes the appropriate find command, and again backgrounds the job.


  • Running Commands in the Background as Jobs Again, bash reports the job number (2) and the process ID of the second find command (7023). The second message from the bash shell is notifying einstein that job number one has finished. The bash shell reports that it has exited with a return code of 1 (as opposed to being killed by a signal), and redisplays the command line to remind einstein of what he had run. The bash shell does not report immediately when jobs die, but waits until the next time it interprets a command line. By the time einstein has digested all of this, he suspects his second job has finished as well. He simply hits the RETURN key (so that bash will "interpret" the empty command line). The bash shell similarly reports his now finished job number 2.
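einstein's terminal captures are missing from this transcript; the mechanics can be sketched with a trivial background job:

```shell
sleep 2 &                          # & sends the command to the background
echo "job started; its PID is $!"  # $! holds the PID of the last background job
jobs                               # list the jobs this shell is managing
wait                               # block until every background job finishes
```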


  • Managing Multiple Jobs The user einstein, like the user maxwell, is often performing physics calculations that take a long time to execute. He starts several different versions of the simulation, backgrounding each.

  • Listing Current Jobs with jobs The user einstein can use the jobs builtin command to report all of his currently running jobs.

    Each of his background jobs are listed, along with the job number. The most recently handled job is referred to as the current job, and is decorated by the jobs command with a +. Bringing a Job to the Foreground with fg A backgrounded job can be brought back to the foreground with the fg builtin command. The fg command expects a job number as an argument, or if none is supplied, will foreground the current job.

    The job sim_c is now running in the foreground. As a consequence, the shell will not issue another prompt until the process terminates.


  • Listing Current Jobs with jobs Suspending the Foreground Job with CTRL+Z We had previously introduced the CTRL+Z control sequence as a method of suspending processes. Now, by watching the output of the bash shell closely as einstein suspends the foreground command, we see that the bash shell treats any suspended foreground process as a job.

    When suspended (or, to use the shell's terminology, stopped), the process is assigned a job number (if it did not already have one) and backgrounded. The jobs command reports the job as a "Stopped" job, and the ps command confirms that the process is in the stopped state.

  • Listing Current Jobs with jobs Restarting a Stopped Job in the Background A stopped job can be restarted in the background with the bg builtin command. Like the fg command, the bg command expects a job number as an argument, or, if none is provided, uses the current job. In the following, einstein restarts his stopped job in the background.

    Now job number 3 is again in the running state.


  • Killing Jobs The kill command, which is used to deliver signals to processes, is implemented as a shell builtin command. (Confusingly, another version is also found in the filesystem, /bin/kill. You are probably using the shell builtin version instead). As a result, it is aware of any jobs that the shell is managing. When specifying which process should receive a signal, the process's job number (if it has one) can be specified in lieu of its process ID. To distinguish the two, job numbers are preceded by a percent character (%), as in the following example.
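The transcript's example is missing here; a minimal sketch of addressing a job by jobspec instead of PID:

```shell
sleep 300 &
jobs           # the new job appears as job number 1
kill %1        # %1: job number 1, in lieu of the process ID
wait           # reap the terminated job
```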


  • Summary

  • Online Exercises Chapter5. Job Control Lab Exercise Objective: Use bash job control to manage multiple tasks. Estimated Time: 10 mins. Specification Start the following four commands, placing each in the background.

    Using job control commands and common control sequences, stop (suspend) the ls and find jobs. Deliverables Four background jobs managed by the bash shell. The cat and sleep jobs should be running, while the find and ls jobs should be suspended. Clean Up After you have graded your exercise, use the kill command (or the fg/CTRL+C combination) to kill all four jobs.


  • Chapter6. Scheduling Delayed Tasks: at Key Concepts The at command can submit commands to run at a later time. The batch command can submit commands to run when the machine's load is low. Commands can either be entered directly, or submitted as a script. stdout from at jobs is mailed to the user. atq and atrm are used to examine and remove currently scheduled jobs.


  • Chapter6. Scheduling Delayed Tasks: at Before discussing the at command directly, we begin with a short discussion of a common Unix concept: daemons. With a name inspired by the physicist Maxwell's Daemon, Unix daemons are processes that run in the background, detached from any terminal, performing tasks that are usually not related to a user at the keyboard. Daemons are often associated with network services, such as the web server (httpd) or the FTP server (vsftpd). Other daemons handle system tasks, such as the logging daemon (syslogd) and the power management daemon (apmd). This Lesson, and the following Lesson, discuss two daemons that allow users to delay tasks (atd), or run commands at fixed intervals (crond). By now, you have probably noticed a naming convention as well: programs meant to be run as daemons usually end in the letter d. Daemons are processes like any other process. They are usually started as part of the system's boot up sequence, or by the administrative user root, so unless you look for them, you might never know that they are there.


  • Chapter6. Scheduling Delayed Tasks: at

    Some daemons run as the user root, while others take on the identity of another system user for security concerns. Above, the crond daemon is running as root, but the atd daemon is running as the user daemon.


  • The atd Daemon

    The atd daemon allows users to submit jobs to be performed at a later time, such as "at 2:00am". In order to use the atd daemon, it must be running. Users can confirm that atd is running simply by examining a list of running processes:

    Notice that the seventh column specifies what terminal a process is associated with. For blondie's grep command, the terminal is pts/2, which probably refers to a network shell or a graphical terminal within an X session. Notice that the atd daemon has no associated terminal. One of the defining characteristics of a daemon is that it drops its association with the terminal that started it.

  • Submitting Jobs with at The at command is used to submit jobs to the atd daemon to be run at a specific time. The commands to be run are either submitted as a script (with the -f command line switch), or entered directly via stdin. Standard out from the command is mailed to the user.

    The time of day can be specified using HH:MM, suffixed by "am" or "pm". The terms "midnight", "noon", and "teatime" can also be used. (You read correctly, "teatime".) A date can also be specified using several formats, including MM/DD/YY. The at(1) man page provides many more details.

  • Submitting Jobs with at The wrestler hogan would like to print a file containing all of the fan mail that he has received, fanmail.txt. He's a little concerned, though, because he shares the printer with ventura, who uses the printer a lot as well. Wanting to avoid a fight, hogan decides to delay his printing until 2:00 in the morning.

    Because hogan did not use the -f command line switch, the at command prompted hogan to type in his commands using stdin (the keyboard). Fortunately, hogan knows that CTRL+D, when entered directly from a terminal, indicates an "end of file". Alternately, he could have piped the command into stdin directly:

  • Submitting Jobs with at Next, hogan confirms that his job has been registered using atq.

    Lastly, hogan remembers that ventura is on vacation, so he can print his fan mail without incident. He decides to cancel his at job, and print the file directly.

  • Delaying Tasks with batch The batch command, like the at command, is used to defer tasks until a later time. Unlike the at command, batch does not run the command at a specific time, but instead waits until the system is not busy with other tasks, whenever that time might be. If the machine is not busy when the job is submitted, the job might run immediately. The atd daemon monitors the system's loadavg, and waits for it to drop beneath 0.8 before running the job. The batch command has a syntax identical to the at command, where jobs can either be specified using stdin, or submitted as a batch file with the -f command line switch. If a time is specified, batch will delay observing the machine until the specified time. At that time, batch will begin monitoring the system's loadavg, and run the job when the system is not otherwise busy.


  • Summary of at Commands

  • Chapter6. Scheduling Delayed Tasks: at Submitting a job for Delayed Execution Objective: Use the atd service to delay a task for later execution Estimated Time: 10 mins.

    Specification You have had a hard time remembering what day it is, so you would like to mail yourself a copy of the current calendar, so that you see it first thing in the morning. Submit an at job that simply runs the cal command, for 3:45 in the morning. Make sure that it is your only job scheduled with the at facility. Deliverables A queued at job, which will generate the output of the cal command at 3:45 in the morning.


  • Chapter7. Scheduling Periodic Tasks: cron Key Concepts The cron facility is used to schedule regularly recurring tasks. The crontab command provides a front end to editing crontab files. The crontab file uses 5 fields to specify timing information. stdout from cron jobs is mailed to the user.


  • Chapter7. Scheduling Periodic Tasks: cron Performing Periodic Tasks Often, people find that they are (or ought to be) performing tasks on a regular basis. In system administration, such tasks might include removing old, unused files from the /tmp directory, or checking to make sure a file that's collecting log messages hasn't grown too large. Other users might find their own tasks, such as checking for large files that they aren't using anymore, or checking a website to see if anything new has been posted. The cron service allows users to configure commands to be run on a regular basis, such as every 10 minutes, once every Thursday, or twice a month. Users specify what commands should be run at what times by using the crontab command to configure their "cron table". The tasks are managed by a traditional Linux (and Unix) daemon, the crond daemon.


  • Chapter7. Scheduling Periodic Tasks: cron The cron Service The crond daemon is the daemon that performs periodic tasks on behalf of the system or individual users. Usually, the daemon is started as the system boots, so most users can take it for granted. By listing all processes and searching for crond, you can confirm that the crond daemon is running.

    If the crond daemon is not running, your system administrator would need to start the crond service as root.


  • crontab Syntax Users specify which jobs to run, and when to run them, by configuring a file known as the "cron table", more often abbreviated "crontab". An example crontab file is listed below.

    A crontab file is a line based configuration file, with each line performing one of three functions: Comments All lines whose first (non-space) character is a # are considered comments, and are ignored. Environment variables All lines that have the form name = value are used to define environment variables. Cron commands Any other (non-blank) line is considered a cron command, which is made up of six fields described below.


  • crontab Syntax Cron command lines consist of six whitespace separated fields. The first 5 fields are used to specify when to run the command, and the remaining sixth field (composed of everything after the fifth field) specifies the command to run. The first five fields specify the following information:

  • crontab Syntax Each of the first five fields must be filled with a token using the following syntax:
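The syntax table from the original slide is not reproduced; a sketch of a crontab file (the commands themselves are hypothetical) illustrates the five timing fields and the sixth command field:

```
# minute  hour  day-of-month  month  day-of-week (0-7, 0 and 7 = Sunday)  command
30   4 *    * *    /usr/bin/who >> "$HOME/who.log"   # every day at 4:30am
*/10 * *    * 1-5  fetchmail                         # every 10 minutes, Monday through Friday
0    0 1,15 * *    echo "first and fifteenth"        # midnight on the 1st and 15th of each month
```

In each field, * matches every value, N-M specifies a range, N,M a list, and */N a step.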

  • Using the crontab Command Users seldom manage their crontab file directly (or even know where it is stored), but instead use the crontab command to edit, list, or remove it. crontab {-e | -l | -r} crontab FILE Edit, list, or remove the current crontab file, or replace the current crontab file with FILE.


  • Using the crontab Command In the following sequence of commands, hogan will use the crontab command to manage his crontab configuration. He first lists his current crontab configuration to the screen, then he lists the current file again, storing the output into the file mycopy.

  • Using the crontab Command Next, hogan removes his current crontab configuration. When he next tries to list the configuration, he is informed that no current configuration exists.

    In order to restore his cron configuration, hogan uses the crontab command once again, this time specifying the mycopy file as an argument. Upon listing his configuration again, he finds that his current configuration was read from the mycopy file.


  • Using the crontab Command A little annoyingly, the banner has been duplicated in the process. Can you figure out why? The original banner was stored in mycopy. When mycopy was resubmitted, cron treated the original banner as a user comment, and prepended a new banner.


  • Editing crontab Files in Place Often, users edit their crontab files in place, using crontab -e. The crontab command will open the current crontab configuration into the user's default editor. When the user has finished editing the file, and exits the editor, the modified contents of the file are installed as the new crontab configuration. The default editor is /bin/vi; however, crontab, like many other commands, examines the EDITOR environment variable. If the variable has been set, it will be used to specify which editor to open. For example, if hogan prefers to use the nano editor, he can first set the EDITOR environment variable to /usr/bin/nano (or simply nano), and then run crontab -e.


  • Editing crontab Files in Place If hogan wanted to use nano as his editor, he could use one of the following approaches:

    or, even better, hogan could add the line "export EDITOR=nano" to his .bash_profile file, and the environment variable would be set automatically every time he logged in. In summary, there are two ways someone could go about creating or modifying their crontab configuration. Create a text file containing their desired configuration, and then install it with crontab FILENAME. Edit their configuration in place with crontab -e.


  • Where does the output go? How does the user receive output from commands run by cron? The crond daemon will mail stdout and stderr from any commands run to the local user. Suppose ventura had set up the following cron job:


  • Where does the output go? Once an hour, at five minutes past the hour, he could expect to receive new mail that looks like the following:

    The mail message contains the output of the command in the body, and all defined environment variables in the message headers. Optionally, ventura could have set the special MAILTO environment variable to a destination email address, and mail would be sent to that address instead:
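The transcript drops the example here; a sketch of such a configuration with MAILTO set (the address is hypothetical, and uptime stands in for whatever command ventura actually ran at five past each hour):

```
MAILTO=ventura@example.com
5 * * * * uptime
```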

  • Environment Variables and cron When configuring cron jobs, users should be aware of a subtle detail. When the crond daemon starts the user's command, it does not run the command from a shell, but instead forks and execs the command directly. This has an important implication: Any environment variables or aliases that are configured by the shell at startup, such as any defined in /etc/profile or ~/.bash_profile, will not be available when cron executes the command. If a user wants an environment variable to be defined, they need to explicitly define the variable in their crontab configuration.

  • Online Exercises Chapter7. Scheduling Periodic Tasks: cron Monitoring Who is on the System Online Exercise Objective: Configure a cron job Estimated Time: 10 mins. Specification You are a little paranoid, and want to monitor who is using your computer in the middle of the night. Configure a cron job which will mail you the output of the who command daily at 4:35 in the morning. Deliverables A cron configuration which mails the output of the who command daily at 4:35 am.

