Inter-process Communication and Process Synchronization
Inter-process
communication is a set of programming interfaces that allow a programmer
to coordinate activities among different program processes that can run
concurrently in an operating system. Many applications involve executing
multiple processes concurrently. These processes work together to perform
application-specific tasks and are referred to as cooperating processes.
Cooperating processes are “loosely” connected in the sense that they have
independent private address spaces and run at different speeds. The relative
speeds of the processes are not normally known. From time to time, they
interact among themselves by exchanging information. An exchange of information
among processes is called an inter-process communication.
Recall that one process cannot access elements from the private address space of
another. Such access violations cause address exceptions, and the operating
system traps them. This implies that processes need assistance from the
operating system to set up communication facilities among themselves. Most
operating systems implement a few different communication schemes to facilitate
inter-process communications. For each scheme, they support a few communication
primitives (interface operations). Processes execute these primitives to
exchange information among themselves.
Ultimately, all
information exchanges involve exchanging data items among the processes. The
producers of data items are called senders, and the consumers are called receivers.
Every data item, sent by a sender, is copied from the sender process address
space to the kernel space from where a receiver process copies the data item
into its own address space. Data items sent by senders remain in the kernel
space until consumed by receivers.
Figure: Inter-process communication via the kernel space
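As a concrete illustration of this indirect scheme, the sketch below uses a POSIX pipe: the sender's write() copies the data item into kernel space, and the receiver's read() copies it into the receiver's own address space. The message text and buffer size are arbitrary choices for the example, and error handling is kept minimal.

/* A minimal sketch of indirect IPC via the kernel: a parent process sends a
 * data item through a pipe and a child process receives it.  The pipe buffer
 * lives in kernel space; write() copies data out of the sender's address
 * space, and read() copies it into the receiver's. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                       /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) == -1) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {                  /* child: the receiver */
        char buf[64];
        close(fd[1]);                /* receiver does not write */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);  /* system call: copy from kernel space */
        if (n > 0) {
            buf[n] = '\0';
            printf("receiver got: %s\n", buf);
        }
        close(fd[0]);
    } else {                         /* parent: the sender */
        const char *msg = "hello";
        close(fd[0]);                /* sender does not read */
        write(fd[1], msg, strlen(msg));  /* system call: copy into kernel space */
        close(fd[1]);
        wait(NULL);                  /* wait for the receiver to finish */
    }
    return 0;
}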
The inter-process communication scheme described above is indirect, as all
inter-process communications go through the kernel space. Every send or
receive operation is performed by making a system call. Most operating systems
also support a mechanism that allows processes to communicate among themselves
directly without copying data via the kernel space. To do this, a “shared memory
region” is set up among the cooperating processes. The operating system maps
parts of the process address spaces to the same physical memory locations so
that all processes having access to the shared memory region immediately
observe changes that may be made in that region. Note that shared memory
regions reside in the user space, and not in the kernel space.
Figure: Inter-process communication through address space sharing
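To make the direct scheme concrete, the sketch below creates a POSIX shared memory region with shm_open and maps it into user space with mmap; once mapped, the sender and receiver exchange a string through plain memory accesses, with no system call per access. The region name "/ipc_demo" and the 4096-byte size are arbitrary choices for this example, and error handling is omitted for brevity.

/* A minimal sketch of direct IPC via a shared memory region.  The region is
 * created, attached, and destroyed with system calls (shm_open, mmap,
 * shm_unlink), but reads and writes to it are ordinary memory accesses.
 * Compile with -lrt on older Linux systems. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/ipc_demo";
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);    /* create the region */
    ftruncate(fd, 4096);                                /* set its size */
    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);             /* map into user space */

    if (fork() == 0) {               /* child: the sender */
        strcpy(region, "hello via shared memory");      /* plain store, no system call */
        return 0;
    }
    wait(NULL);                      /* parent: the receiver */
    printf("receiver read: %s\n", region);              /* plain load, no system call */

    munmap(region, 4096);
    shm_unlink(name);                /* destroy the region */
    return 0;
}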
In indirect communication schemes, processes make system
calls for each inter-process communication (send or receive of a data item). By
contrast, in shared memory-based direct communication schemes, processes
directly communicate by reading and modifying values in the common physical
memory locations. As these memory locations are mapped to the user space, the
processes do not need to make system calls to access contents of these
locations. Nonetheless, they create, attach, and destroy shared memory regions
through system calls. Every inter-process
communication scheme (direct or indirect) can be modeled as the schematic
shown in the figure below. Ultimately, all communications involve storing and retrieving
information in some shared medium. The senders store information in the medium,
and the receivers retrieve the information from it. For easier manipulation,
the information is structured into various data objects called shared data or
shared variables. Shared variables are units of information referenced by
processes. Each shared variable has a unique name/address, a type, a set of
interface operations, and consistency semantics. The type defines a finite
domain of values, and the variable can store any value from that domain. Interface
operations are the only means to access the shared variable. The semantics of
each operation describe the permitted behavior of executions of the operation.
Processes communicate among themselves by manipulating values of shared
variables through the operations supported by those variables. Inter-process
communication schemes vary and use shared variables of different types.
Figure: A typical inter-process communication facility
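As a small sketch of this model, the fragment below defines a shared counter variable in C: its type fixes the domain of values, and counter_inc and counter_read are its only interface operations. The names counter_t, counter_inc, and counter_read are illustrative choices for this example, not part of any standard API.

/* A shared variable as described above: a name, a type (a counter holding a
 * long value), and interface operations that are the only means of access. */
typedef struct {
    volatile long value;     /* the stored value, drawn from the type's domain */
} counter_t;

/* Interface operation: add one to the counter. */
void counter_inc(counter_t *c)
{
    c->value = c->value + 1; /* a read-modify-write; not atomic by itself */
}

/* Interface operation: return the current value. */
long counter_read(const counter_t *c)
{
    return c->value;
}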
Each process executes its own operations on shared
variables sequentially, as specified by its own program. Nevertheless,
different processes may execute their operations on the same shared variable
concurrently. That is, operation executions of different processes may overlap,
and may affect one another. Note that each operation, when executed indivisibly
on a shared variable, transforms the variable from one consistent value to
another. However, when the operations are executed concurrently on a shared variable
(without any access control), the consistency of its values may not be
guaranteed. The behavior of operation executions on shared variables must be
predictable for effective inter-process communications. Thus, operation
executions on shared variables may need to be coordinated to ensure their
consistency semantics. Coordination of accesses to shared variables (or shared
space) is called synchronization. Most operating systems implement a few
different synchronization schemes for process coordination purposes. Each
scheme supports a set of primitives. The primitives are used when it is
absolutely necessary to order executions of operations on shared variables in a
particular manner.
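The sketch below illustrates one such primitive: a process-shared POSIX semaphore used as a lock. Two processes increment a counter held in a shared memory region; sem_wait and sem_post order the read-modify-write sequences so that no increment is lost. Error handling is omitted, and the example assumes a POSIX system (such as Linux) where sem_init supports process-shared semaphores; compile with -pthread.

/* Synchronizing concurrent operations on a shared variable: both processes
 * increment the same counter, and the semaphore serializes the critical
 * sections so the final value is consistent. */
#include <stdio.h>
#include <semaphore.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

struct shared { sem_t lock; long counter; };

int main(void)
{
    /* An anonymous shared mapping visible to the parent and its child. */
    struct shared *s = mmap(NULL, sizeof(*s), PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    sem_init(&s->lock, 1, 1);        /* 1 = shared between processes */
    s->counter = 0;

    pid_t pid = fork();
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s->lock);          /* enter the critical section */
        s->counter++;                /* consistent read-modify-write */
        sem_post(&s->lock);          /* leave the critical section */
    }
    if (pid == 0)
        return 0;                    /* child is done */
    wait(NULL);
    printf("final counter = %ld\n", s->counter);  /* 200000: no lost updates */
    return 0;
}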