Correct solution of the Dining Philosophers problem
CS 300, Parallel and Distributed Computing (PDC)
Due Monday, October 8, 2018
Preliminary material
Laboratory exercises
The goal of this lab is to implement at least one of the solutions to the Dining Philosophers problem discussed in class.
Log into an admin node for a cluster, create the subdirectory ~/PDC/lab4, then change into that lab4 subdirectory.

In order to manage the code in the SyncServer class more easily, refactor your synchronization server (i.e., rewrite that code to have a better design) as follows.

Copy dp.h from your lab3 subdirectory to your lab4 subdirectory, then delete the keyword static before the run() method of the class SyncServer. This makes run() an instance method instead of a class method.

Test this modified version of dp.h by copying a running version of dp.cpp from the previous lab, and modifying the copy as follows. In the code for the synchronization server machine, replace the class-method call SyncServer::run() by a call of the instance method run() for a SyncServer object:

    SyncServer ss;
    ss.run();
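The effect of this refactoring can be sketched as follows. This is a minimal skeleton, not the lab's actual dp.h: the MPI receive loop is elided, and the buffer() accessor and the "ready" string are illustrative assumptions.

```cpp
#include <cassert>
#include <cstring>

// Sketch of the refactoring: deleting `static` makes run() an instance
// method, so per-server state such as buff[] can become a data member
// instead of a local variable in main().  (The real run() loops on
// MPI_Recv; that body is elided here.)
class SyncServer {
public:
    // was:  static int run();   // class method: no access to instance state
    int run() {                  // instance method: can use buff directly
        std::strcpy(buff, "ready");
        return 0;
    }
    const char* buffer() const { return buff; }
private:
    char buff[256];              // state variable owned by the object
};

// Usage, as in the refactored dp.cpp:
//     SyncServer ss;
//     ss.run();
```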
Then compile and run dp.cpp, and verify that it operates as it did before.

Create a commit recording this work as follows:

    cumulus$ cp dp.cpp dp1.cpp
    cumulus$ cp dp.h dp1.h
    cumulus$ git add {dp,dp1}.{cpp,h}
    cumulus$ git commit -m "lab4: SyncServer::run as an instance method"

Make
buff[] a state variable of SyncServer, instead of just a local variable in main(), so all SyncServer methods will be able to access it. Test your code to see that it still runs correctly, perhaps using a program similar to the first simple test you wrote for Sync.h in the previous lab.

Write a method do_quit(int source) for carrying out the following actions when a QUIT message is received:

    char msg[] = "bye";
    MPI_Send(msg, strlen(msg), MPI_CHAR, source, ACK, MPI_COMM_WORLD);
    cout << "Shutting down MPI_COMM_WORLD" << endl;
    MPI_Abort(MPI_COMM_WORLD, MPI_SUCCESS);

Now, replace the code for the QUIT case by the following:

    case QUIT:
        do_quit(status.MPI_SOURCE);
        return 0;
        break;
Note: A helper method like do_quit() is sometimes called a handler or callback for QUIT messages.

Also replace the body of the PRINT case by a call to a three-line class method do_print(int source), followed by that case's break;. Test that the code still works after these changes.

Proceed to replace the bodies of the cases for ECHO, GETCHOP, GETCHOP_BLOCKING, and PUTCHOP by calls to new callback methods, one for each of those operations. In the handlers for PUTCHOP, GETCHOP, and GETCHOP_BLOCKING, convert the array chop[] into a state variable.
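After these changes, the server's structure might look like the following sketch. The tag names match the lab's protocol, but the enum values, the dispatch() wrapper, and the string return values are assumptions for illustration; the real handlers perform MPI_Send and related calls.

```cpp
#include <cassert>
#include <string>

// Message tags from the lab's protocol (the enum values here are
// placeholders; the real tags are whatever dp.h defines).
enum Tag { QUIT, PRINT, ECHO, GETCHOP, GETCHOP_BLOCKING, PUTCHOP };

class SyncServer {
public:
    // Dispatch one received message to its handler ("callback").  In the
    // real run() loop, tag and source come from the MPI_Status filled in
    // by MPI_Recv.
    std::string dispatch(Tag tag, int source) {
        switch (tag) {
        case QUIT:             return do_quit(source);
        case PRINT:            return do_print(source);
        case ECHO:             return do_echo(source);
        case GETCHOP:          return do_getchop(source);
        case GETCHOP_BLOCKING: return do_getchop_blocking(source);
        case PUTCHOP:          return do_putchop(source);
        }
        return "unknown";
    }
private:
    // Stubs standing in for the real handlers, which call MPI_Send, etc.
    std::string do_quit(int)             { return "quit"; }
    std::string do_print(int)            { return "print"; }
    std::string do_echo(int)             { return "echo"; }
    std::string do_getchop(int)          { return "getchop"; }
    std::string do_getchop_blocking(int) { return "getchop_blocking"; }
    std::string do_putchop(int)          { return "putchop"; }
    int chop[5] = {-1, -1, -1, -1, -1};  // state variable shared by handlers
    char buff[256];                      // likewise a state variable
};
```

Keeping each case body down to a single handler call (plus break;) is what makes it cheap to add new services later.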
If you add new services to the class SyncServer in the future, use this callback approach to keep the code well organized.

Create a commit recording this work as follows:

    cumulus$ cp dp.cpp dp2.cpp
    cumulus$ cp dp.h dp2.h
    cumulus$ git add {dp,dp2}.{cpp,h}
    cumulus$ git commit -m "lab4: Refactor SyncServer to use handler methods"

Algorithm 1: Waiter solution.
Here are some suggestions for the implementation.
Start with a copy wdp.cpp of your deadlock.cpp from the previous lab, and make a copy wdp.h of your refactored dp.h (for this lab). Also modify wdp.cpp to #include the file wdp.h instead of dp.h.

Now change the callback SyncServer::do_getchop_blocking() to act as the waiter. That callback should:

- allow a requesting philosopher to obtain a chopstick c if at least two chopsticks remain unallocated, including c, in the array chop[],
- allow a requesting philosopher to obtain its right chopstick even if that is the last unallocated chopstick,
- (optional) allow a requesting philosopher to obtain its left chopstick c if some other philosopher currently possesses both of its chopsticks, as indicated in the array chop[], even if c is the last unallocated chopstick, and
- otherwise refuse to allocate the chopstick c (i.e., if c is already allocated, or c is the last unallocated chopstick in an "unsafe" state).
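One way to express the waiter's decision is as a pure function over chop[]. Everything below is a sketch under stated assumptions, not the lab's required code: chopstick c is taken to be unallocated when chop[c] == -1, and philosopher p's right chopstick is assumed to be chop[p] and its left chop[(p+1) % n]; adapt these conventions to your own representation.

```cpp
#include <cassert>
#include <vector>

// chop[c] == -1 means chopstick c is unallocated; otherwise chop[c] holds
// the rank of the philosopher currently possessing it.
// Assumed convention: philosopher p's right chopstick is p, left is (p+1) % n.
bool waiter_allows(const std::vector<int>& chop, int c, int p) {
    int n = (int)chop.size();
    if (chop[c] != -1) return false;          // already allocated: refuse
    int freeCount = 0;
    for (int x : chop) if (x == -1) ++freeCount;
    if (freeCount >= 2) return true;          // at least two free (incl. c): safe
    // Otherwise c is the last unallocated chopstick:
    if (c == p) return true;                  // p's right chopstick: safe
    // (optional rule) p's left chopstick is safe if some other philosopher
    // already holds both of its chopsticks, since that philosopher can eat
    // and will eventually release them.
    if (c == (p + 1) % n) {
        for (int q = 0; q < n; ++q)
            if (q != p && chop[q] == q && chop[(q + 1) % n] == q)
                return true;
    }
    return false;                             // unsafe state: refuse
}
```

In the real do_getchop_blocking(), a refusal means deferring the request (blocking the philosopher) rather than returning false.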
Test these changes using wdp.cpp (whose only change is one header file).

Create a commit recording this work as follows:

    cumulus$ cp wdp.cpp wdp1.cpp
    cumulus$ cp wdp.h wdp1.h
    cumulus$ git add {wdp,wdp1}.{cpp,h}
    cumulus$ git commit -m "lab4: Implement the waiter in SyncServer::do_getchop_blocking()"

Add an integer state variable
freechops, initialized to the total number of chopsticks, to the class SyncServer in wdp.h. freechops has the following invariant: at all times, freechops holds the number of unallocated chopsticks in chop[].

To maintain this invariant, be sure to subtract 1 from freechops every time do_getchop() allocates any chopstick, and add 1 to freechops every time do_putchop() deallocates any chopstick. The purpose of freechops is to avoid counting the unallocated chopsticks in a loop within do_getchop() every time the waiter considers whether to allocate a free chopstick.

Create a commit recording this work as follows:

    cumulus$ cp wdp.cpp wdp2.cpp
    cumulus$ cp wdp.h wdp2.h
    cumulus$ git add {wdp,wdp2}.{cpp,h}
    cumulus$ git commit -m "lab4: Maintain counter freechops in SyncServer"
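A minimal sketch of maintaining the invariant, with the MPI reply logic of the real handlers elided (the two-argument handler signatures, the fixed table size, and the free_count() accessor are assumptions for illustration):

```cpp
#include <cassert>

class SyncServer {
public:
    static const int N = 5;               // number of chopsticks (assumed)
    SyncServer() : freechops(N) {
        for (int i = 0; i < N; ++i) chop[i] = -1;   // all unallocated
    }
    // Allocate chopstick c to the philosopher `source`, if it is free.
    bool do_getchop(int source, int c) {
        if (chop[c] != -1) return false;  // already taken: refuse
        chop[c] = source;
        --freechops;                      // maintain the invariant
        return true;
    }
    // Deallocate chopstick c.
    void do_putchop(int c) {
        if (chop[c] != -1) {
            chop[c] = -1;
            ++freechops;                  // maintain the invariant
        }
    }
    int free_count() const { return freechops; }
private:
    int chop[N];
    int freechops;  // invariant: number of -1 entries in chop[]
};
```

With the counter in place, the waiter's "at least two chopsticks free" test is a single comparison (freechops >= 2) rather than a loop over chop[].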
Algorithm 2: Chandy-Misra's solution. (OPTIONAL)
Design and implement the Chandy-Misra solution to the Dining Philosophers problem, described in "Correct solution of the Dining Philosophers problem".
Summary of patterns in the MPI labs.
These are the patterns we have encountered so far.
High-level patterns
Data Decomposition - split the data into chunks for independent operations. See Lab 1 (all versions of the trap code) and Lab 2 (determining k).

Task Decomposition - decompose the instruction stream into independent simultaneous tasks. See Lab 3 (synchronization server and philosophers) and the current lab.
Mid-level patterns
Loop Parallel - iterations of a loop are divided among parallel tasks. See Lab 1 (trapezoidal estimations) and Lab 2 (big-primes.cpp).

SPMD (Single Program, Multiple Data) - multiple tasks run the same program, each operating on its own portion of the data. See Lab 1 (trapezoid estimations), Lab 2 (determining k and the big-primes search for large primes), Lab 3 (philosopher processes), and Lab 4 (philosopher processes).

Master-Worker - a task balances computational work among a set of other tasks. See Lab 2 (head node and worker nodes).

Scatter-Gather - data values are distributed to tasks, then results are collected from the tasks, in cycles. See Lab 2 (mentioned in connection with Master-Worker).

Shared Array - correctly sharing an array structure among concurrent tasks. See Lab 3 (chop[]) and Lab 4 (chop[], managed by the waiter).
Low-level patterns
Message Passing - tasks communicate by sending and receiving messages of data. See all four labs.

Collective Communication - tasks communicate by transmissions that all tasks participate in. See Lab 1 (use of MPI_Reduce) and Lab 2 (especially broadcast2.cpp).

Mutual Exclusion - synchronization in which no two tasks are simultaneously in their critical sections. See Lab 3 (synchronization server protection of chop) and Lab 4 (implementation of blocking by the waiter).
Deliverables
All of your code for this assignment should already be
contained in commits. Modify the most recent
commit message to indicate that you are submitting
the completed assignment.
    cumulus$ git commit --amend

Add the following to the latest commit message:

    lab4: complete

If your assignment is not complete, indicate your progress instead, e.g.,

    lab4: items 1-5 complete, item 6 partial

You can make later commits to submit updates.
Finally, pull/push your commits in
the usual way.
    cumulus$ git pull origin master
    cumulus$ git push origin master
Submit your work using git on the cluster head
node you used. Include the cluster name in your commit message, e.g.,
    mist$ cd ~/PDC
    mist$ git pull origin master
    mist$ git add -A
    mist$ git commit -m "Lab 4 submit (mist)"
    mist$ git push origin master

Also, fill out this form to report on your work.
See lab1 Deliverables
section if you need to set up the cluster you used for
git.
If you did your work on two different clusters, submit work from both of them using one of the strategies in lab1 Deliverables.
This lab is due by Monday, October 8, 2018.