Correct solution of the Dining Philosophers problem
CS 300, Parallel and Distributed Computing (PDC)
Due Monday, October 8, 2018
Preliminary material
Laboratory exercises
The goal of this lab is to implement at least one of the solutions to the Dining Philosophers problem discussed in class.
Log into an admin node for a cluster, and create the subdirectory ~/PDC/lab4, then change into the lab4 subdirectory.

In order to manage the code in the SyncServer class more easily, refactor your synchronization server (i.e., rewrite that code to have a better design) as follows.

Copy dp.h from your lab3 subdirectory to your lab4 subdirectory, then delete the keyword static before the run() method of the class SyncServer. This makes run() an instance method instead of a class method.

Test this modified version of dp.h by copying a running version of dp.cpp from the previous lab, and modifying the copy as follows. In the code for the synchronization server machine, replace the class-method call SyncServer::run() by a call of the instance method run() for a SyncServer object:

SyncServer ss;
ss.run();
Then compile and run dp.cpp, and verify that it operates as it did before.

Create a commit recording this work as follows:

cumulus$ cp dp.cpp dp1.cpp
cumulus$ cp dp.h dp1.h
cumulus$ git add {dp,dp1}.{cpp,h}
cumulus$ git commit -m "lab4: SyncServer::run as an instance method"
Make buff[] a state variable of SyncServer, instead of just a local variable in main(), so all SyncServer methods will be able to access it. Test your code to see that it still runs correctly, perhaps using a program similar to the first simple test you wrote for Sync.h in the previous lab.

Write a method do_quit(int source) for carrying out the following actions when a QUIT message is received:

char msg[] = "bye";
MPI_Send(msg, strlen(msg), MPI_CHAR, source, ACK, MPI_COMM_WORLD);
cout << "Shutting down MPI_COMM_WORLD" << endl;
MPI_Abort(MPI_COMM_WORLD, MPI_SUCCESS);

Now, replace the code for the QUIT case by the following:

case QUIT:
  do_quit(status.MPI_SOURCE);
  return 0;
  break;
Note: A helper method like do_quit() is sometimes called a handler or callback for QUIT messages.

Also replace the body of the PRINT case by a call to a three-line method do_print(int source), followed by that case's break;. Test that the code still works after these changes.

Proceed to replace the bodies of the cases for ECHO, GETCHOP, GETCHOP_BLOCKING, and PUTCHOP by calls to new callback methods, one for each of those operations. In the cases of PUTCHOP, GETCHOP, and GETCHOP_BLOCKING, convert the array chop[] into a state variable.
If you add new services to the class SyncServer in the future, use this callback approach to keep the code well organized.

Create a commit recording this work as follows:

cumulus$ cp dp.cpp dp2.cpp
cumulus$ cp dp.h dp2.h
cumulus$ git add {dp,dp2}.{cpp,h}
cumulus$ git commit -m "lab4: Refactor SyncServer to use handler methods"
Algorithm 1: Waiter solution.
Here are some suggestions for the implementation.
Start with a copy wdp.cpp of your deadlock.cpp from the previous lab, and make a copy wdp.h of your refactored dp.h (for this lab). Also modify wdp.cpp to #include the file wdp.h instead of dp.h.

Now change the callback SyncServer::do_getchop_blocking() to act as the waiter. That callback should:

- allow a requesting philosopher to obtain a chopstick c if at least two chopsticks remain unallocated, including c, in the array chop[],
- allow a requesting philosopher to obtain its right chopstick even if that is the last unallocated chopstick,
- (optional) allow a requesting philosopher to obtain its left chopstick c if some other philosopher currently possesses both of its chopsticks, as indicated in the array chop[], even if c is the last unallocated chopstick, and
- otherwise refuse to allocate the chopstick c (i.e., if c is already allocated or c was the last unallocated chopstick in an "unsafe" state).
Test these changes using wdp.cpp (whose only change is one header file).

Create a commit recording this work as follows:

cumulus$ cp wdp.cpp wdp1.cpp
cumulus$ cp wdp.h wdp1.h
cumulus$ git add {wdp,wdp1}.{cpp,h}
cumulus$ git commit -m "lab4: Implement the waiter in SyncServer::do_getchop_blocking()"
Add an integer state variable freechops, initialized to the total number of chopsticks, to the class SyncServer in wdp.h. freechops has the following invariant: at all times, freechops holds the number of unallocated chopsticks in chop[].

To maintain that invariant, be sure to subtract 1 from freechops every time do_getchop() or do_getchop_blocking() allocates any chopstick, and add 1 to freechops every time do_putchop() deallocates any chopstick. The purpose of freechops is to avoid having to count unallocated chopsticks in a loop every time the waiter do_getchop_blocking() considers whether to allocate a free chopstick.

Create a commit recording this work as follows:

cumulus$ cp wdp.cpp wdp2.cpp
cumulus$ cp wdp.h wdp2.h
cumulus$ git add {wdp,wdp2}.{cpp,h}
cumulus$ git commit -m "lab4: Maintain counter freechops in SyncServer"
Algorithm 2: Chandy-Misra's solution. (OPTIONAL)
Design and implement the Chandy-Misra solution to the Dining Philosophers problem, described in Correct solution of the Dining Philosophers problem.
Summary of patterns in the MPI labs.
These are the patterns we have encountered so far.
High-level patterns
Data Decomposition - split the data into chunks for independent operations. See Lab 1 (all versions of the trap code) and Lab 2 (determining k).

Task Decomposition - decompose the instruction stream into independent simultaneous tasks. See Lab 3 and the current lab (synchronization server and philosophers).
Mid-level patterns
Loop Parallel - parallelize the iterations of a loop among multiple tasks. See Lab 1 (trapezoidal estimations) and Lab 2 (big-primes.cpp).

SPMD - a Single Program is executed by all tasks, each operating on Multiple (different) Data. See Lab 1 (trapezoid estimations), Lab 2 (determining k and the big-primes search for large primes), Lab 3 (philosopher processes), and Lab 4 (philosopher processes).

Master-Worker - a task balances computational work among a set of other tasks. See Lab 2 (head node and worker nodes).

Scatter-Gather - data values are distributed to tasks, then results are collected from the tasks in cycles. See Lab 2 (mentioned in connection with Master-Worker).

Shared Array - correctly sharing an array structure among concurrent tasks. See Lab 3 (chop[]) and Lab 4 (chop[], managed by the waiter).
Low-level patterns
Message Passing - tasks communicate by sending and receiving messages of data. See all four labs.

Collective Communication - tasks communicate by transmissions in which all tasks participate. See Lab 1 (use of MPI_Reduce) and Lab 2 (especially in broadcast2.cpp).

Mutual Exclusion - synchronization in which no two tasks are simultaneously in their critical sections. See Lab 3 (synchronization server protection of chop) and Lab 4 (implementation of blocking by the waiter).
Deliverables
All of your code for this assignment should already be contained in commits. Modify the most recent commit message to indicate that you are submitting the completed assignment.

cumulus$ git commit --amend

Add the following to the latest commit message.

lab4: complete

If your assignment is not complete, indicate your progress instead, e.g.,

lab4: items 1-5 complete, item 6 partial

You can make later commits to submit updates.
Finally, pull/push your commits in the usual way.

cumulus$ git pull origin master
cumulus$ git push origin master

Submit your work using git on the cluster head node you used. Include the cluster name in your commit message, e.g.,

mist$ cd ~/PDC
mist$ git pull origin master
mist$ git add -A
mist$ git commit -m "Lab 4 submit (mist)"
mist$ git push origin master

Also, fill out this form to report on your work.
See the lab1 Deliverables section if you need to set up git on the cluster you used.

If you did your work on two different clusters, submit work from both of them using one of the strategies in lab1 Deliverables.

This lab is due by Monday, October 8, 2018.