Cooperation: Interprocess Communication

Concurrent Processing

Modern operating systems, such as Unix, execute processes concurrently: although there is a single Central Processor (CPU), which executes the instructions of only one program at a time, the operating system rapidly switches the processor between different processes (usually allowing a single process a few hundred microseconds of CPU time before replacing it with another process). Let's take a moment to see how the operating system manages this. A computer consists of resources that a program can use to accomplish a task. Here are several examples:

  1. the CPU, which executes the instructions of a program;
  2. main memory, which holds the code and data of running processes;
  3. the disk, which holds the file system;
  4. I/O devices, such as terminals, printers, and network connections.

Some of these resources (such as memory) are simultaneously shared by all processes: they are used in parallel by all running processes on the system. Other resources can be used by only one process at a time, and so must be carefully managed so that every process gets access to the resource: they are used concurrently by the running processes. The most important example of a shared resource is the CPU, although most of the I/O devices are also shared. For many of these shared resources, the operating system distributes the time a process is given on the resource to ensure reasonable access for all processes. Consider the CPU: the operating system has a clock which raises an alarm every few hundred microseconds. At this point the operating system stops the CPU, saves all the information needed to restart the process exactly where it left off (this includes the current instruction being executed, the contents of the CPU's registers, and other data), and removes the process from the CPU. The operating system then selects another process to run, restores the state of the CPU to what it was when that process last ran, and starts the CPU again.
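The sharing of the CPU described above can be seen directly by creating a second process. Below is a minimal sketch in Python, whose os module exposes the Unix fork and wait calls; after the fork, the kernel time-slices the CPU between parent, child, and every other process on the system. The exit status 42 is an arbitrary value chosen for illustration.

```python
import os

pid = os.fork()               # create a second process; both now run concurrently
if pid == 0:
    # Child process: do some work, then exit with a status code.
    total = sum(range(1000))
    os._exit(42)              # exit status is reported back to the parent
else:
    # Parent process: runs concurrently with the child, then waits for it.
    _, status = os.waitpid(pid, 0)
    print("child exit status:", os.WEXITSTATUS(status))
```

While the child computes its sum, the parent may be running, stopped, or waiting; the interleaving is decided entirely by the operating system's scheduler.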

Cooperation

Concurrent execution allows processes to cooperate (constructively or destructively) with other processes. Processes are cooperating if they can affect each other. The simplest example of how this can happen is where two processes are using the same file. One process may be writing to a file while another process is reading from the file; so, what is being read may be affected by what is being written. Processes cooperate by sharing data. Cooperation is important for several reasons: processes may need to share information, divide a task among themselves to speed up a computation, or be structured as separate modules that work together.

Cooperation between processes requires mechanisms that allow processes to communicate data with each other and to synchronize their actions so they do not harmfully interfere with each other. The purpose of this note is to consider the ways that processes can communicate data with each other, called Interprocess Communication (IPC). Another note will discuss process synchronization, and in particular the most important means of synchronizing activity, the use of semaphores.

The Producer/Consumer Paradigm of Cooperation

An important paradigm, common to many examples of cooperation, is the Producer/Consumer model of cooperation. A producer process produces data which may be shared with other processes. A consumer process consumes the data produced (and may or may not change it). For example, the Unix pipe creates an example of cooperation that follows the Producer/Consumer paradigm:

       cat my_header.h   |  sed 's/STDOUT/stdout/g' > my_header_new.h

Here the producer process, cat, produces data by writing out the file my_header.h, which is piped to the consumer process, sed, which edits the data. This is an example of mutual cooperation, and it uses a shared resource to communicate the data: the pipe.
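The same pattern can be programmed directly with the pipe IPC facility. A minimal sketch in Python (using the os module's fork, pipe, read, and write wrappers around the Unix calls): the child is the producer, writing into the pipe, and the parent is the consumer, reading from it.

```python
import os

r, w = os.pipe()              # kernel-managed channel: read end, write end
pid = os.fork()
if pid == 0:
    os.close(r)               # the producer only writes
    os.write(w, b"STDOUT becomes stdout\n")
    os.close(w)
    os._exit(0)
else:
    os.close(w)               # the consumer only reads
    data = os.read(r, 1024)
    os.close(r)
    os.waitpid(pid, 0)
    print(data.decode(), end="")
```

Closing the unused end of the pipe in each process is important: the consumer's read reports end-of-file only when every write end has been closed.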

A common problem among producers and consumers is synchronizing their actions so that they do not destructively interfere with each other. Let's consider two examples of how this problem can arise. A simple example is the Reader/Writer Problem: the producer is a writer which writes data to a file, while the consumer is a reader which reads the data written. If two readers are reading a file, there is no problem with interference; however, problems arise when a writer and a reader, or two writers, access the file simultaneously. How can we synchronize readers and writers to ensure cooperation which is not destructive? A second example is the Bounded Buffer Problem: we have a buffer (a storage facility) which the producer fills with data and the consumer empties. When the buffer is empty the consumer must wait; when the buffer is full the producer must wait. In this example, producers must cooperate with each other, consumers must cooperate with each other, and producers and consumers must mutually cooperate. As you might expect, the problem of synchronization is more complicated, and access to the buffer must be more restrictive than access to a file by readers and writers. The Bonus Problem of Lab5 is an example of the Bounded Buffer Problem where the buffer is represented by a counter to which all processes have access.
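The waiting described in the Bounded Buffer Problem can be sketched with a fixed-size queue. In this illustration threads stand in for processes, and Python's queue.Queue does the synchronization for us: put() blocks when the buffer is full, and get() blocks when it is empty.

```python
import queue
import threading

buffer = queue.Queue(maxsize=4)         # the bounded buffer: at most 4 items
consumed = []

def producer():
    for item in range(10):
        buffer.put(item)                # waits whenever the buffer is full

def consumer():
    for _ in range(10):
        consumed.append(buffer.get())   # waits whenever the buffer is empty

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(consumed)
```

The producer generates ten items but the buffer never holds more than four at once; the kernel-level version of this problem, with separate processes and a shared counter, is exactly the Bonus Problem of Lab5.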

Another important paradigm of cooperation is the Client/Server model of cooperation. In this paradigm, the server announces that it has services it can provide (often this is data it makes available, but it may be some task it can perform) and the client requests some service. We have seen one example in Lab5: a server announces that it can capitalize any text, and clients provide the text to be capitalized. The data is exchanged through message queues. A more interesting example is when the client and server reside on different machines and the communication channel is a network socket. This will be our next topic for the course.
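A sketch of that Lab5-style exchange, with Python's multiprocessing.SimpleQueue standing in for System V message queues (the structure is the same even though the facility differs): the client sends text on a request queue, and the server capitalizes it and replies on a response queue.

```python
import multiprocessing as mp
import os

requests, responses = mp.SimpleQueue(), mp.SimpleQueue()
pid = os.fork()
if pid == 0:
    # Server process: wait for a request, perform the service, reply.
    text = requests.get()
    responses.put(text.upper())
    os._exit(0)
else:
    # Client process: send the text to be capitalized, await the reply.
    requests.put("hello world")
    reply = responses.get()
    os.waitpid(pid, 0)
    print(reply)
```

Notice that the queues themselves synchronize the two processes: the server's get() simply waits until a client has something to say.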

Interprocess Communication (IPC) Facilities

A computer provides many resources which allow processes to communicate with each other. These resources are Interprocess Communication (IPC) Facilities. The simplest is through the file system on disk. This is a resource that all processes in the system share, and with appropriate file permissions, two otherwise unrelated processes can exchange data. There are three problems with using files to communicate:

  1. Processes can only communicate if they share the same file system. A network such as the internet opens-up possible connections between processes. The network socket is the IPC facility provided by the operating system to allow access for processes through a network.
  2. Because files reside on disk, access to them is VERY slow, so using files as a means of IPC is very slow and inefficient.
  3. Because files are so easily accessed, they make it very difficult to synchronize processes to ensure that their cooperation is not destructive. The Reader/Writer Problem frequently arises through the unintentional activities of processes sharing files.
It is because of the second and third problems that the kernel provides alternative methods of IPC.
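Communicating through a file can be sketched in a few lines; the wait in the parent is the only thing preventing the reader from racing ahead of the writer, which is exactly the synchronization hazard of the third problem above. (The file name and message are arbitrary choices for the illustration.)

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "ipc_demo.txt")
pid = os.fork()
if pid == 0:
    with open(path, "w") as f:       # the writer process
        f.write("message via the file system")
    os._exit(0)
else:
    os.waitpid(pid, 0)               # crude synchronization: wait for the writer
    with open(path) as f:            # the reader process
        data = f.read()
    print(data)
```

Remove the waitpid() and the parent may open the file before the child has written anything: the file system itself offers no coordination.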

Consider the second problem above: processes share the disk (at least when given permission by a file's owner), but the disk is very slow to access. Why can't processes share main memory instead? They can, but much greater care is needed. Main memory is the primary home of the code and data of an executing process, so it is crucial that the operating system carefully control access to the memory of each process. The kernel does this by constructing virtual memory, which blinds a process to all but the memory that is allocated to it. This powerful protection makes it difficult to share memory between processes: each process sees only its own virtual address space. The operating system must step in if processes are to share data.

An alternative to sharing disk storage is to share main memory; but doing so requires careful control by the kernel. Each process which wants to use a shared memory segment must first request to be attached to that segment; the kernel then provides the process with a virtual address for the segment. In this way multiple processes can share the same piece of main memory, although each process has a different virtual address for the memory segment. Any change made to the shared memory segment by one process is seen by all processes. The advantage of sharing the same main memory segment is that reading and writing data is much easier and much quicker than exchanging data through a file. Still, synchronizing the processes to ensure mutual cooperation remains a problem.

There are other forms of IPC the kernel provides which can solve the synchronization problem; these forms of IPC use the operating system to enforce synchronization. The operating system can provide its own memory to act as a buffer between processes: this is what it does with Message Queues. A message queue is a special buffer that the operating system maintains in its own memory. It allows for the exchange of data, and it synchronizes the reading and writing of data to ensure mutual cooperation. The drawback is that message queues are very limited in size, and they may be too restrictive to suit the communication needs of processes. An important alternative, which is really at the heart of Unix, is the Pipe. Pipes were implemented in Unix from the very beginning, and they allow large quantities of data to pass between processes. Pipes can be unnamed, allowing only related processes to use them, or named, allowing unrelated processes to use them. The operating system allows only one process at a time to read from, or write to, a pipe; this provides careful synchronization of processes. Still, pipes are a resource with limitations: an ordinary pipe is half-duplex, so data flows in only one direction on the channel.
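A named pipe (a FIFO) can be sketched as follows; because it has a name in the file system, even unrelated processes could open it, though here a forked child writes and the parent reads. The kernel synchronizes the two ends: each open() blocks until the other end of the FIFO has been opened too.

```python
import os
import tempfile

fifo = os.path.join(tempfile.mkdtemp(), "demo_fifo")
os.mkfifo(fifo)                    # create the named pipe in the file system
pid = os.fork()
if pid == 0:
    with open(fifo, "w") as f:     # blocks until a reader opens the FIFO
        f.write("through the named pipe")
    os._exit(0)
else:
    with open(fifo) as f:          # blocks until a writer opens the FIFO
        data = f.read()
    os.waitpid(pid, 0)
    os.unlink(fifo)                # remove the FIFO when done
    print(data)
```

Unlike a regular file, the data never touches the disk: the name is on disk, but the bytes pass through a kernel buffer.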

Unix does provide full-duplex pipes, called stream pipes, which allow two-way communication on the same channel (in this way, they are like a telephone).
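On modern Unix systems the socketpair() call provides essentially this facility: a connected, full-duplex pair of descriptors, sketched here within a single process for brevity. Data flows in both directions on the same channel.

```python
import socket

a, b = socket.socketpair()         # the two ends of a full-duplex channel
a.sendall(b"ping")                 # one direction...
request = b.recv(4)
b.sendall(b"pong")                 # ...and the other, on the same channel
reply = a.recv(4)
a.close(); b.close()
print(request, reply)
```

In a real program, either descriptor would typically be handed to a forked child, giving parent and child a telephone-style two-way connection.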

The important point to take away from this is that communication between processes can be a complicated business. There are many kinds of communication, and some IPC facilities are better suited to a given task than others. The more the operating system is involved, the more synchronized the activity of communicating can be, but also the more restrictive the means of communication. The two most versatile facilities for communication involve sharing memory: files on disk and shared memory segments; and by their very versatility they also require some means of ensuring synchronization between cooperating processes. Semaphores are a primary means for doing this, and they will be the subject of another tutorial.