Terms: Mutex vs. Critical Section

A critical section provides synchronization within one process only, while a mutex allows data synchronization across processes.

critical section

In computer programming, a critical section is a piece of code that accesses a shared resource (a data structure or device) that must not be concurrently accessed by more than one thread of execution. A critical section will usually terminate in fixed time, and a thread, task or process will only have to wait a fixed time to enter it. Some synchronization mechanism is required at the entry and exit of the critical section to ensure exclusive use, for example a semaphore.

By carefully controlling which variables are modified inside and outside the critical section (usually, by accessing important state only from within), concurrent access to that state is prevented. A critical section is typically used when a multithreaded program must update multiple related variables without a separate thread making conflicting changes to that data. In a related situation, a critical section may be used to ensure a shared resource, for example a printer, can only be accessed by one process at a time.

How critical sections are implemented varies among operating systems.

The simplest method is to prevent any change of processor control inside the critical section. On uniprocessor systems, this can be done by disabling interrupts on entry into the critical section, avoiding system calls that can cause a context switch while inside the section, and restoring interrupts to their previous state on exit. With this implementation, any thread of execution entering any critical section anywhere in the system prevents every other thread, including interrupt handlers, from getting the CPU, and therefore from entering any other critical section or, indeed, running any code at all, until the original thread leaves its critical section.
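As a rough illustration of this approach, a uniprocessor kernel might wrap the section as sketched below; disable_interrupts() and restore_interrupts() are hypothetical names standing in for platform-specific operations (e.g. the cli/sti instructions on x86), not a portable API:

/* A minimal sketch, assuming hypothetical interrupt-masking primitives. */
unsigned long flags;

flags = disable_interrupts();  /* save the previous state and mask interrupts */
/* ... access the shared data; no context switch can occur here ... */
restore_interrupts(flags);     /* re-enable interrupts only if they were enabled before */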

This brute-force approach can be improved upon by using semaphores. To enter a critical section, a thread must obtain a semaphore, which it releases on leaving the section. Other threads are prevented from entering the critical section at the same time as the original thread, but are free to gain control of the CPU and execute other code, including other critical sections that are protected by different semaphores.
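For a sense of what the semaphore variant looks like in practice, here is a minimal sketch using POSIX semaphores (a real API, though the worker/counter arrangement is invented for illustration; compile with -pthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t lock;           /* binary semaphore guarding the shared counter */
long shared_counter;  /* the protected resource */

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&lock);   /* obtain the semaphore: enter the critical section */
        shared_counter++;  /* safe: at most one thread executes this at a time */
        sem_post(&lock);   /* release the semaphore: leave the critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    sem_init(&lock, 0, 1);  /* one unit available => mutual exclusion */
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", shared_counter);  /* always 200000 */
    sem_destroy(&lock);
    return 0;
}

While one thread holds the semaphore, the other blocks inside sem_wait() but remains free to be scheduled for any other code it has to run.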

Some confusion exists in the literature about the relationship between different critical sections in the same program. In general, a resource that must be protected from concurrent access may be accessed by several pieces of code. Each piece must be guarded by a common semaphore. Is each piece now a critical section, or are all the pieces guarded by the same semaphore in aggregate a single critical section? This confusion is evident in definitions of a critical section such as "… a piece of code that can only be executed by one process or thread at a time". This only works if all access to a protected resource is contained in one "piece of code", which requires either the definition of a piece of code or the code itself to be somewhat contrived.

Application Level Critical Sections

Application-level critical sections reside in the memory range of the process and are usually modifiable by the process itself. This is called a user-space object because the program run by the user (as opposed to the kernel) can modify and interact with the object. However, the functions called may jump to kernel-space code to register the user-space object with the kernel.

Example Code For Critical Sections with Win32 API

/* Sample C/C++, Win9x/NT/ME/2000/XP, link to kernel32.dll */
#include <windows.h>

CRITICAL_SECTION cs; /* This is the critical section object -- once initialized,
                        it cannot be moved in memory */

int main(void)
{
    /* Initialize the critical section -- this must be done before locking */
    InitializeCriticalSection(&cs);

    /* Enter the critical section -- other threads are locked out */
    EnterCriticalSection(&cs);

    /* Do some thread-safe processing! */

    /* Leave the critical section -- other threads can now EnterCriticalSection() */
    LeaveCriticalSection(&cs);

    /* Release system object when all finished -- usually at the end of the cleanup code */
    DeleteCriticalSection(&cs);

    return 0;
}

Note that on Windows NT (not 9x/ME), you can use the function TryEnterCriticalSection() to attempt to enter the critical section. This function returns immediately, so the thread can do other things if it fails to enter the critical section (usually because another thread has locked it). Note that a Win32 CRITICAL_SECTION is not the same as a Win32 Mutex, which is an object used for inter-process synchronization. A Win32 CRITICAL_SECTION is for inter-thread synchronization (and is much faster to lock and unlock); however, it cannot be shared across processes.
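As an illustration, a thread can poll the lock instead of blocking; a minimal sketch reusing the cs object from the example above (the "other useful work" is left abstract):

/* Non-blocking entry on NT-class systems. */
if (TryEnterCriticalSection(&cs)) {
    /* Acquired the lock without waiting */
    /* ... do some thread-safe processing ... */
    LeaveCriticalSection(&cs);
} else {
    /* Another thread holds cs; do other useful work and try again later */
}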

Kernel Level Critical Sections

Typically, critical sections prevent process and thread migration between processors and the preemption of processes and threads by interrupts and other processes and threads.

Critical sections often allow nesting. Nesting allows multiple critical sections to be entered and exited at little cost.

If the scheduler interrupts the current process or thread in a critical section, the scheduler will either allow the process or thread to run to completion of the critical section, or it will schedule the process or thread for another complete quantum. The scheduler will not migrate the process or thread to another processor, and it will not schedule another process or thread to run while the current process or thread is in a critical section.

Similarly, if an interrupt occurs in a critical section, the interrupt’s information is recorded for future processing, and execution is returned to the process or thread in the critical section. Once the critical section is exited (and, in some cases, the scheduled quantum completes), these pending interrupts will be processed.

Since critical sections may execute only on the processor on which they are entered, synchronization is required only within the executing processor. This allows critical sections to be entered and exited at almost zero cost. No interprocessor synchronization is required, only instruction stream synchronization. Most processors provide the required amount of synchronization by the simple act of interrupting the current execution state. This allows critical sections in most cases to be nothing more than a per-processor count of critical sections entered.
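A minimal sketch of that bookkeeping, assuming hypothetical kernel primitives (this_cpu_data() and deliver_pending_interrupts() are invented names, not a real kernel API):

/* Per-processor nesting counter for kernel critical sections. */
struct cpu_data { int cs_count; };

void enter_critical(void)
{
    this_cpu_data()->cs_count++;      /* purely local: no interprocessor sync */
}

void exit_critical(void)
{
    struct cpu_data *cpu = this_cpu_data();
    if (--cpu->cs_count == 0)         /* leaving the outermost nested section */
        deliver_pending_interrupts(); /* run work deferred while inside */
}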

Performance enhancements include executing pending interrupts at the exit of all critical sections and allowing the scheduler to run at the exit of all critical sections. Furthermore, pending interrupts may be transferred to other processors for execution.

Critical sections should not be used as a long-lived locking primitive. They should be short enough that the critical section will be entered, executed, and exited without any interrupts occurring, whether from hardware or the scheduler.


semaphore (programming)

A semaphore is a protected variable (or abstract data type) and constitutes the classic method for restricting access to equivalent shared resources (e.g. storage) in a multiprogramming environment. Semaphores were invented by Edsger Dijkstra and first used in the THE operating system.

The value of the semaphore is initialized to the number of equivalent shared resources it is implemented to control. In the special case where there is a single equivalent shared resource, the semaphore is called a binary semaphore. The general case semaphore is often called a counting semaphore.

Semaphores are the classic solution to the dining philosophers problem, although they do not prevent all deadlocks.

Introduction

Semaphores can only be accessed using the following operations:

P(Semaphore s)
{
  await s > 0, then s := s-1; /* must be atomic once s > 0 is detected */
}

V(Semaphore s)
{
  s := s+1;   /* must be atomic */
}

Init(Semaphore s, Integer v)
{
  s := v;
}

Notice that incrementing the variable s must not be interrupted, and the P operation must not be interrupted after s is found to be positive. This can be done by a special instruction (if the architecture’s instruction set supports it) or by ignoring interrupts in order to prevent other processes from becoming active.
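On hardware with an atomic compare-and-swap instruction, that indivisibility can come from the instruction set itself. A busy-waiting sketch in C11, with function names mirroring the pseudocode above (a real implementation would also consider memory ordering and sleeping, discussed below):

#include <stdatomic.h>

/* Busy-waiting P and V over a C11 atomic counter. The compare-exchange
   makes the "test s > 0, then decrement" step indivisible. */
void P(atomic_int *s)
{
    for (;;) {
        int v = atomic_load(s);
        if (v > 0 && atomic_compare_exchange_weak(s, &v, v - 1))
            return;  /* claimed one unit atomically */
        /* otherwise another thread won the race: spin and retry */
    }
}

void V(atomic_int *s)
{
    atomic_fetch_add(s, 1);  /* atomic increment */
}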

The canonical names P and V come from the initials of Dutch words. V stands for verhoog, or "increase." Several explanations have been given for P (including passeer "pass," probeer "try," and pakken "grab"), but in fact Dijkstra wrote that he intended P to stand for the made-up portmanteau word prolaag,[1] short for probeer te verlagen, or "try-and-decrease."[2][3] (A less ambiguous English translation would be "try-to-decrease.") This confusion stems from the unfortunate characteristic of the Dutch language that the words for increase and decrease both begin with the letter V, and the words spelled out in full would be impossibly confusing for non–Dutch-speakers.

The value of a semaphore is the number of units of the resource which are free. (If there is only one resource, a "binary semaphore" with values 0 or 1 is used.) The P operation busy-waits (or maybe sleeps) until a resource is available, whereupon it immediately claims one. V is the inverse; it simply makes a resource available again after the process has finished using it. Init is only used to initialize the semaphore before any requests are made. The P and V operations must be atomic, which means that no process may ever be preempted in the middle of one of those operations to run another operation on the same semaphore.

In English textbooks, and in the programming language ALGOL 68, the P and V operations are sometimes called, respectively, down and up. In software engineering practice they are called wait and signal, or take and release, or pend and post.

To avoid busy-waiting, a semaphore may have an associated queue of processes (usually a FIFO). If a process performs a P operation on a semaphore which has the value zero, the process is added to the semaphore’s queue. When another process increments the semaphore by performing a V operation, and there are processes on the queue, one of them is removed from the queue and resumes execution.
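One common user-space way to get this sleeping behavior is to build the semaphore from a mutex and a condition variable. A sketch with POSIX threads follows; note that the "queue" here is simply the set of threads blocked on the condition variable, woken in an implementation-chosen order rather than strict FIFO:

#include <pthread.h>

/* A sleeping (non-busy-waiting) semaphore built from a mutex and a
   condition variable. */
typedef struct {
    pthread_mutex_t m;
    pthread_cond_t  nonzero;
    int             count;
} qsem_t;

void qsem_init(qsem_t *s, int v)
{
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->nonzero, NULL);
    s->count = v;
}

void qsem_P(qsem_t *s)
{
    pthread_mutex_lock(&s->m);
    while (s->count == 0)                 /* sleep instead of spinning */
        pthread_cond_wait(&s->nonzero, &s->m);
    s->count--;
    pthread_mutex_unlock(&s->m);
}

void qsem_V(qsem_t *s)
{
    pthread_mutex_lock(&s->m);
    s->count++;
    pthread_cond_signal(&s->nonzero);     /* wake one blocked waiter, if any */
    pthread_mutex_unlock(&s->m);
}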

Semaphores today as used by programmers

Semaphores remain in common use in programming languages that do not intrinsically support other forms of synchronization. They are the primitive synchronization mechanism in many operating systems. The trend in programming language development, though, is towards more structured forms of synchronization, such as monitors and channels. In addition to their inadequacies in dealing with deadlocks, semaphores do not protect the programmer from the easy mistakes of taking a semaphore that is already held by the same process, and forgetting to release a semaphore that has been taken.

Example usage

Since semaphores can have a count associated with them, they are typically used when multiple threads need to cooperate to achieve an objective. Consider this example:

We have a thread A that needs information from two databases before it can proceed. Access to these databases is controlled by two separate threads B and C. These two threads have a message-processing loop; any thread needing their services posts a message into their message queue. Thread A initializes a semaphore S with init(S,-1). A then posts a data request, including a pointer to the semaphore S, to both B and C. Then A calls P(S), which blocks. The other two threads meanwhile take their time obtaining the information; when each thread finishes, it calls V(S) on the passed semaphore. Only after both threads have completed will the semaphore’s value be positive and A be able to continue. A semaphore used in this way is called a "counting semaphore."
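A sketch of this rendezvous with POSIX semaphores (which cannot be initialized to a negative value, so instead of init(S,-1) and a single P, thread A here initializes S to 0 and waits twice; the database threads are reduced to trivial workers for illustration):

#include <pthread.h>
#include <semaphore.h>

sem_t done;  /* counting semaphore: one unit per completed request */

void *db_worker(void *arg)
{
    /* ... obtain the requested information ... */
    sem_post(&done);              /* V: report completion to thread A */
    return NULL;
}

int main(void)
{
    pthread_t b, c;
    sem_init(&done, 0, 0);        /* no completed work yet */
    pthread_create(&b, NULL, db_worker, NULL);
    pthread_create(&c, NULL, db_worker, NULL);
    sem_wait(&done);              /* P: block until one reply arrives */
    sem_wait(&done);              /* P: block until the other arrives */
    /* both databases have answered; A can proceed */
    pthread_join(b, NULL);
    pthread_join(c, NULL);
    return 0;
}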

Apart from a counting semaphore we also have a "blocking semaphore." A blocking semaphore is a semaphore that is initialized to zero. This has the effect that any thread that does a P(S) will block until another thread does a V(S). This kind of construct is very useful when the order of execution among threads needs to be controlled.

The simplest kind of semaphore is the "binary semaphore," used to control access to a single resource. It is essentially the same as a mutex. It is always initialized with the value 1. When the resource is in use, the accessing thread calls P(S) to decrease this value to 0, and restores it to 1 with the V operation when the resource is ready to be freed.

mutual exclusion

Mutual exclusion (often abbreviated to mutex) algorithms are used in concurrent programming to avoid the simultaneous use of unshareable resources by pieces of computer code called critical sections.

Examples of such resources are fine-grained flags, counters or queues, used to communicate between code that runs concurrently, such as an application and its interrupt handlers. The problem is acute because a thread can be stopped or started at any time.

To illustrate: suppose a section of code is mutating a piece of data over several program steps when another thread, perhaps triggered by some unpredictable event, starts executing. If this second thread reads from the same piece of data, then the data, being in the process of being overwritten, is in an inconsistent and unpredictable state. If the second thread tries to overwrite that data, the ensuing state will probably be unrecoverable. These critical sections of code accessing shared data must therefore be protected, so that other processes which read from or write to the chunk of data are excluded from running.

A mutex is also a common name for a program object that negotiates mutual exclusion among threads, also called a lock.

Introduction

On a uniprocessor system, the common way to achieve mutual exclusion is to disable interrupts for the smallest possible number of instructions that will prevent corruption of the shared data structure, the so-called "critical region". This prevents interrupt code from running in the critical region. Besides this hardware-supported solution, some software solutions exist that use busy-waiting to achieve the goal. Examples include:

In a computer in which several processors share memory, an indivisible test-and-set of a flag is used in a tight loop to wait until the other processor clears the flag. The test-and-set performs both operations without releasing the memory bus to another processor. When the code leaves the critical region, it clears the flag. This is called a "spinlock" or "busy-wait."
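A sketch of such a spinlock using the C11 atomic_flag type, whose test-and-set operation is guaranteed indivisible:

#include <stdatomic.h>

/* A spinlock built on an indivisible test-and-set. */
atomic_flag flag = ATOMIC_FLAG_INIT;

void spin_lock(void)
{
    /* test-and-set returns the previous value; loop while it was already set */
    while (atomic_flag_test_and_set(&flag))
        ;  /* busy-wait until the current holder clears the flag */
}

void spin_unlock(void)
{
    atomic_flag_clear(&flag);  /* leave the critical region */
}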

Some computers have similar indivisible multiple-operation instructions for manipulating the linked lists used for event queues and other data structures commonly used in operating systems.

Most classical mutual exclusion methods attempt to reduce latency and busy-waits by using queuing and context switches. Some claim that benchmarks indicate that these special algorithms waste more time than they save.

Many forms of mutual exclusion have side-effects. For example, classic semaphores permit deadlocks, in which one process gets a semaphore, another process gets a second semaphore, and then both wait forever for the other semaphore to be released. Other common side-effects include starvation, in which a process never gets sufficient resources to run to completion, priority inversion in which a higher priority thread waits for a lower-priority thread, and "high latency" in which response to interrupts is not prompt.
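The classic lock-ordering deadlock can be sketched with two POSIX mutexes (whether a given run actually deadlocks depends on timing):

#include <pthread.h>

pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;

void *t1(void *arg)
{
    pthread_mutex_lock(&m1);  /* t1 holds m1 ... */
    pthread_mutex_lock(&m2);  /* ... and waits for m2 */
    pthread_mutex_unlock(&m2);
    pthread_mutex_unlock(&m1);
    return NULL;
}

void *t2(void *arg)
{
    pthread_mutex_lock(&m2);  /* t2 holds m2 ... */
    pthread_mutex_lock(&m1);  /* ... and waits for m1: possible deadlock */
    pthread_mutex_unlock(&m1);
    pthread_mutex_unlock(&m2);
    return NULL;
}

Having every thread acquire the locks in one agreed global order removes the cycle, and with it the deadlock.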

Much research is aimed at eliminating the above effects, such as by guaranteeing non-blocking progress. No perfect scheme is known.

