Spin Locks


Transcript of Spin Locks

8/6/2019 Spin Locks

Spin locks are a low-level synchronization mechanism suitable primarily for use on shared-memory multiprocessors. When the calling thread requests a spin lock that is already held by another thread, the calling thread spins in a loop, testing whether the lock has become available. Once obtained, the lock should be held only for a short time, as the spinning wastes processor cycles. Callers should unlock spin locks before calling sleep operations, so that other threads can obtain the lock.

Spin locks can be implemented using mutexes and condition variables, but the pthread_spin_* functions are a standardized way to perform spin locking. The pthread_spin_* functions impose much lower overhead for locks of short duration.

When performing any lock operation, a trade-off is made between the processor resources consumed while setting up to block the thread and the processor resources consumed by the thread while it is blocked. Spin locks require few resources to set up the blocking of a thread: they simply loop, repeating the atomic locking operation until the lock is available. The thread continues to consume processor resources while it is waiting.

Compared to spin locks, mutexes consume a larger amount of processor resources to block the thread. When a mutex lock is not available, the thread changes its scheduling state and adds itself to the queue of waiting threads. When the lock becomes available, these steps must be reversed before the thread obtains the lock. While the thread is blocked, however, it consumes no processor resources.

Therefore, spin locks and mutexes are useful for different purposes. Spin locks might have lower overall overhead for very short-term blocking, and mutexes might have lower overall overhead when a thread will be blocked for longer periods of time.

What about using a lock variable, which must be tested by each process before it enters its critical section? If another process is already in its critical section, the lock is set to 1, and the process currently using the processor is not permitted to enter its critical section. If the value of the lock variable is 0, the process enters its critical section and sets the lock to 1. The problem with this potential solution is that the operation that reads the value of the lock variable, the operation that compares that value to 0, and the operation that sets the lock are three different atomic actions. With this solution, it is possible that one process tests the lock variable and sees that it is open, but before it can set the lock, another process is scheduled, runs, sets the lock, and enters its critical section. When the original process resumes, it too will enter its critical section, violating the policy of mutual exclusion.

The only problem with the lock variable solution is that the action of testing the variable and the action of setting the variable are executed as separate instructions. If these operations could be combined into one indivisible step, this would be a workable solution. These steps can be combined, with a little help from hardware, into what is known as a TSL or TEST and SET LOCK instruction. A call to the TSL instruction copies the


value of the lock variable and sets it to a nonzero (locked) value, all in one step. While the value of the lock variable is being tested, no other process can enter its critical section, because the lock is set. Let us look at an example of the TSL in use with two operations, enter_region and leave_region:

Semaphores

A semaphore is a hardware or software flag. In multitasking systems, a semaphore is a variable whose value indicates the status of a common resource; it is used to lock the resource that is being used. A process needing the resource checks the semaphore to determine the resource's status and then decides how to proceed.

In programming, especially in UNIX systems, semaphores are a technique for coordinating or synchronizing activities in which multiple processes compete for the same operating system resources. A semaphore is a value in a designated place in operating system (or kernel) storage that each process can check and then change. Depending on the value that is found, the process can use the resource, or it will find that the resource is already in use and must wait for some period before trying again. Semaphores can be binary (0 or 1) or can have additional values. Typically, a process using semaphores checks the value and then, if it is using the resource, changes the value to reflect this so that subsequent semaphore users will know to wait. Semaphores are commonly used for two purposes: to share a common memory space and to share access to files. Semaphores are one of the techniques for interprocess communication. The C programming language provides a set of interfaces or "functions" for managing semaphores.

Semaphores are used to protect critical regions of code or data structures. Remember that each access of a critical piece of data, such as a VFS inode describing a directory, is made by kernel code running on behalf of a process. It would be very dangerous to allow one process to alter a critical data structure that is being used by another process. One way to achieve this protection would be to use a buzz lock around the critical piece of data that is being accessed, but this is a simplistic approach that would degrade system performance.

Instead, Linux uses semaphores to allow just one process at a time to access critical regions of code and data; all other processes wishing to access this resource will be made to wait until it becomes free. The waiting processes are suspended while other processes in the system continue to run as normal.


Suppose the initial count for a semaphore is 1. The first process to come along will see that the count is positive and decrement it by 1, making it 0. The process now "owns" the critical piece of code or resource that is being protected by the semaphore. When the process leaves the critical region it increments the semaphore's count. The optimal case is where there are no other processes contending for ownership of the critical region; Linux has implemented semaphores to work efficiently for this, the most common case.

If another process wishes to enter the critical region while it is owned by a process, it too will decrement the count. As the count is now negative (-1), the process cannot enter the critical region; instead, it must wait until the owning process exits. Linux makes the waiting process sleep until the owning process wakes it on exiting the critical region. The waiting process adds itself to the semaphore's wait queue and sits in a loop, checking the value of the waking field and calling the scheduler until waking is non-zero.

The owner of the critical region increments the semaphore's count, and if the count is then less than or equal to zero, there are processes sleeping, waiting for this resource. In the optimal case the semaphore's count would have been returned to its initial value of 1 and no further work would be necessary. Otherwise, the owning process increments the waking counter and wakes up the process sleeping on the semaphore's wait queue. When the waiting process wakes up, the waking counter is now 1 and it knows that it may enter the critical region. It decrements the waking counter, returning it to a value of zero, and continues. All accesses to the waking field of the semaphore are protected by a buzz lock using the semaphore's lock.