Think Stitch

Sharing Services by Interleaving Clients

Let's look again at the example mutex model explained in the SyncStitch Users Guide.

The process MUTEX uses two sets of events for the two processes P and Q: {lock.p, unlock.p} for the process P, and {lock.q, unlock.q} for Q.

Each process uses the events (interfaces) prepared for itself: it first locks the mutex, then does its own work, and finally releases the mutex. The work is described (modelled) with two events; in the case of the process P, they are p1 and p2. The reason why two events are used is to ensure that the operation inside the lock takes place atomically.

The parallel composition for the system is defined as follows: first, the processes P and Q, between which there is no direct interaction, are combined with the empty sync-list. Then the result and the process MUTEX are combined, with all of the interface events specified as the sync-list.

(define-process SYS
  (par (list lock.p unlock.p
             lock.q unlock.q)
    (par '() P Q)
    MUTEX))

The computation tree of the system process is as follows.

Abstract Interfaces by Renaming

Now, we rename the events prepared for two processes P and Q to the same events. In other words, we abstract the distinction between lock.p and lock.q, unlock.p and unlock.q. We define brand-new events lock and unlock, and rename both lock.p and lock.q to lock, unlock.p and unlock.q to unlock.

(define-process SYSA
  (rename `((,lock.p . ,lock)
            (,unlock.p . ,unlock)
            (,lock.q . ,lock)
            (,unlock.q . ,unlock))
    SYS))

The computation tree of the renaming is as follows.

As you can see, mutual exclusion is properly functioning.

If the MUTEX allowed the owner of the lock to perform the lock operation again, this abstraction would not work, because after the renaming the mutex can no longer tell whether a second lock comes from the owner or from the other process.

However, this kind of abstraction is often convenient when there is more than one client and the service process need not care about which client it is serving.

MUTEX Model by Interleaving

Let's build, from scratch, a mutex process which does not distinguish between its client processes. We use interleaving. Unlike the processes we have created so far, the client processes share the same events for locking and unlocking the mutex. They are not intended to synchronize on these events, so the client processes are combined by interleaving.

As the following figure shows, combining processes by interleaving can be seen as an OR-combination. (On an AND-OR tree, an arc drawn across the branches usually denotes an AND-condition, but here it denotes OR.)
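To make the OR reading concrete, here is a small Python sketch (an illustration, not SyncStitch code) that enumerates the interleavings of one round of each client's trace. Each interleaving preserves the order of events within each client, but the clients never synchronize with each other.

```python
from math import comb

def interleavings(xs, ys):
    """All merges of two finite traces: each result preserves the
    internal order of xs and of ys, but the traces never synchronize."""
    if not xs:
        return [list(ys)]
    if not ys:
        return [list(xs)]
    with_x = [[xs[0]] + rest for rest in interleavings(xs[1:], ys)]
    with_y = [[ys[0]] + rest for rest in interleavings(xs, ys[1:])]
    return with_x + with_y

# One round of each client, using the shared lock/unlock events
# (the p1/p2/q1/q2 work events are as in the text):
p = ["lock", "p1", "p2", "unlock"]
q = ["lock", "q1", "q2", "unlock"]
traces = interleavings(p, q)

# The number of merge choices for traces of length m and n is C(m+n, n).
assert len(traces) == comb(len(p) + len(q), len(p))   # 70

# Interleaving alone even allows both clients inside the lock at once;
# it is the MUTEX, composed with {lock, unlock} as the sync-list,
# that prunes such traces.
assert ["lock", "lock", "p1", "q1", "p2", "q2", "unlock", "unlock"] in traces
```

The last assertion is the point of the OR reading: the interleaved clients on their own allow any merge, and mutual exclusion is restored only by the later composition with MUTEX.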

The MUTEX process

The MUTEX process can be defined as follows. It does not care who owns the lock.
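The behaviour can be pictured as a two-state machine. Here is a minimal Python sketch of it (an illustration, not SyncStitch code; the state names are made up): MUTEX simply alternates lock and unlock and carries no information about which client holds the lock.

```python
# MUTEX as a two-state labelled transition system.
MUTEX = {
    "unlocked": {"lock": "locked"},
    "locked":   {"unlock": "unlocked"},
}

def run(lts, state, trace):
    """Follow a trace of events; return the final state,
    or None if the process refuses one of the events."""
    for event in trace:
        if event not in lts[state]:
            return None
        state = lts[state][event]
    return state

assert run(MUTEX, "unlocked", ["lock", "unlock", "lock"]) == "locked"
assert run(MUTEX, "unlocked", ["lock", "lock"]) is None  # a second lock is refused
```

Refusing the second lock is what makes the composed system safe even though the clients share the same lock event.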

Client processes

Client processes use the shared events to lock and unlock the mutex.

Parallel composition

The system process can be defined as follows:

(define-process SYS
  (par (list lock unlock)
    (par '() P Q)
    MUTEX))

In this case, the order in which the processes are combined is important:

  1. First, the client processes which share the events must be combined by interleaving.
  2. Next, the result of the above and the MUTEX process are combined, with the interface events specified as the sync-list.

If you combined one of the client processes with the MUTEX process first, the other client process would have to synchronize not only with the MUTEX process but also with the first client process.
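This effect can be checked with a small Python sketch (an illustration, not SyncStitch code; the state names and the work_p/work_q events are invented for the example). A toy parallel operator over transition systems shows that in the wrong composition order a single lock event must be performed by P, MUTEX, and Q all at once, which puts both clients inside the critical section together.

```python
def par(sync, a, b):
    """Parallel composition of two LTSs. An LTS maps each state to a
    list of (event, next_state) pairs. Events in sync must be performed
    by both sides together; all other events interleave."""
    out = {}
    for s in a:
        for t in b:
            trans = []
            for event, s2 in a[s]:
                if event in sync:
                    trans += [(event, (s2, t2)) for e2, t2 in b[t] if e2 == event]
                else:
                    trans.append((event, (s2, t)))
            trans += [(event, (s, t2)) for event, t2 in b[t] if event not in sync]
            out[(s, t)] = trans
    return out

def reachable(lts, init):
    seen, todo = {init}, [init]
    while todo:
        for _, nxt in lts[todo.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return seen

# Clients and mutex over the shared events (hypothetical state names):
P = {"p0": [("lock", "p1")], "p1": [("work_p", "p2")], "p2": [("unlock", "p0")]}
Q = {"q0": [("lock", "q1")], "q1": [("work_q", "q2")], "q2": [("unlock", "q0")]}
MUTEX = {"free": [("lock", "busy")], "busy": [("unlock", "free")]}
sync = {"lock", "unlock"}

# Correct order: interleave the clients first, then add MUTEX.
good = par(sync, par(set(), P, Q), MUTEX)
# Wrong order: P and MUTEX first, then Q -- now Q's lock must also
# synchronize with P's lock.
bad = par(sync, par(sync, P, MUTEX), Q)

def both_locked(p, q):          # both clients past their lock event
    return p != "p0" and q != "q0"

assert not any(both_locked(p, q)
               for (p, q), _ in reachable(good, (("p0", "q0"), "free")))
assert any(both_locked(p, q)
           for (p, _), q in reachable(bad, (("p0", "free"), "q0")))
```

The first assertion confirms mutual exclusion in the correct composition; the second shows that the wrong order reaches a state in which both clients hold the lock at once.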

The computation tree of the system process is as follows:

Semaphore Model with Interleaving

Let me introduce another example that uses interleaving: a semaphore process.

Suppose there is a resource which can be used by at most three processes at the same time. We will use a semaphore to control the processes which use the resource. We assume there are five client processes.

We define two interface events for the semaphore: wait and signal. The wait event requests permission to use the resource (a.k.a. the P instruction), and the signal event reports the end of use of the resource (the V instruction).

We assign IDs 0, 1, ... to the client processes. We also assign events p.0, p.1, ... to the processes to indicate their own work. These events are defined using define-event-list.

(define-event wait)
(define-event signal)

(define M 5)
(define N 3)

(define-event-list ps M "p")

The semaphore process SEM

The semaphore process SEM can be defined as follows. It has a state variable c which counts the number of processes that can still pass through the semaphore. The initial value of the variable is N = 3. While the count c is not zero, the semaphore offers the event wait; if wait occurs, c is decremented. When the event signal is received, c is incremented.
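The counting behaviour can be sketched in Python (an illustration, not SyncStitch code). The state is just the count c; which events SEM is ready to engage in depends on c.

```python
N = 3   # capacity of the resource, as in the text

def offers(c):
    """Events SEM is ready to engage in when its count is c.
    (Following the description above, signal is always accepted.)"""
    return ["wait", "signal"] if c > 0 else ["signal"]

def step(c, event):
    """State change of SEM on one event."""
    if event == "wait" and c > 0:
        return c - 1
    if event == "signal":
        return c + 1
    raise ValueError("refused: " + event)

c = N
for e in ["wait", "wait", "wait"]:
    c = step(c, e)
assert c == 0 and offers(c) == ["signal"]   # a fourth wait would be refused
assert step(c, "signal") == 1               # a signal restores capacity
```

When c reaches zero, wait disappears from the offered events, so a fourth client is blocked until some client performs signal.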

The computation tree of the semaphore process SEM is as follows. It is very similar to the tree of the stack process explained in the User Guide.

Client processes

Each client process has its own ID k. The behaviours of the client processes are the same except for the event which indicates the work of each process. A client process first generates the wait event to get permission to use the resource, then uses the resource, which is indicated by the event p.k. The event p.k is accessed as (list-ref ps k). Finally, the client generates the signal event to report the end of its work.

(define-process (P k)
  (! wait
     (! (list-ref ps k)
        (! signal (P k)))))

Parallel compositions

The system process is defined with parallel compositions. First, all of the client processes are combined by interleaving, and then the result is combined with the semaphore process, with the interface events specified as the sync-list. This is the same approach we used for the MUTEX model.

(define-process SYS
  (par (list wait signal)
    (xpar k (interval 0 M) '()
      (P k))
    SEM))

Exploring the system process

Part of the computation tree of the system process is shown below. At first, every client has the right to start its work; which client starts first cannot be controlled.

At most three client processes can do their work at the same time. Even if a fourth client tries to generate the wait event, the semaphore does not accept it, and the client has to wait. The signal events can also occur in any order, which can be observed on the tree as diamond-like transitions.
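The "at most three at a time" claim can be checked by exhaustively exploring a Python model of the system (an illustration, not SyncStitch code; the three-valued client state encoding is made up for the example).

```python
M, N = 5, 3   # five clients, capacity three, as in the text

def replace(clients, k, v):
    return clients[:k] + (v,) + clients[k + 1:]

def successors(state):
    """Client local states: 0 = before wait, 1 = using the resource
    (about to do p.k), 2 = before signal."""
    c, clients = state
    for k, s in enumerate(clients):
        if s == 0 and c > 0:          # wait: only while capacity remains
            yield (c - 1, replace(clients, k, 1))
        elif s == 1:                  # p.k: the client's own work
            yield (c, replace(clients, k, 2))
        elif s == 2:                  # signal: give the capacity back
            yield (c + 1, replace(clients, k, 0))

# Exhaustive reachability from the initial state, like exploring the tree.
init = (N, (0,) * M)
seen, todo = {init}, [init]
while todo:
    for nxt in successors(todo.pop()):
        if nxt not in seen:
            seen.add(nxt)
            todo.append(nxt)

# In every reachable state, at most three clients are past wait,
# and three clients inside at once is actually reachable.
inside = [sum(1 for s in clients if s != 0) for _, clients in seen]
assert max(inside) == N
```

The invariant behind the assertion is that c plus the number of clients between wait and signal is always N, so the count never exceeds three.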