Introduction to MPI programming

CS 300, Parallel and Distributed Computing (PDC)
Due Wednesday, January 7, 2015

Preliminary material

Laboratory exercises

  1. Create a directory for your work on this lab.

  2. Set up passwordless SSH from your account on the link computers to the cumulus head node, and to the head node(s) of other cluster(s). (This is a convenience, providing a secure way to access the clusters without having to enter your password often.)

  3. Carry out the steps of the Beowiki page Trapezoid_Tutorial in order to log into one of the Beowulf clusters, create an MPI application trap.cpp for computing a trapezoidal approximation of an integral, and run that application with 8 processes.

    Notes:

  4. Create a copy of that program named trap2.cpp, which includes the following modifications.

  5. Standalone vs. parallel execution. The C++ class Trap in the program trap.cpp encapsulates the computation necessary to approximate a definite integral in calculus using trapezoids. Note that this class's method approxIntegral() computes a trapezoidal approximation for any values a, b, and n, whether or not parallel programming is used.
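
    For reference, the core of such a class is an ordinary sequential trapezoid-rule loop. The sketch below shows one plausible shape for the class method (it is a reconstruction, not the tutorial's exact code; the integrand f and the argument names are assumptions):

      class Trap {
      public:
        // Class (static) method: approximate the integral of f on [a, b]
        // using n trapezoids, each of width h = (b - a) / n.
        static double approxIntegral(double a, double b, int n) {
          double h = (b - a) / n;
          double sum = (f(a) + f(b)) / 2.0;   // the two endpoints count half
          for (int i = 1; i < n; i++)
            sum += f(a + i * h);              // interior points count fully
          return sum * h;
        }

      private:
        static double f(double x) { return x * x; }   // example integrand (an assumption)
      };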

  6. Output from nodes. The mpirun program that we are using to execute MPI jobs captures and merges the standard output streams from all of the nodes (processing elements). Test this by adding an output statement that includes the MPI node identifier my_rank just after each node determines its value of integral.
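
    For example, a statement along the following lines (a sketch; it assumes the variables my_rank and integral from the tutorial's main(), and that <iostream> is included) makes it easy to see which process produced which partial result:

      // Placed just after each process computes its own value of integral:
      std::cout << "Process " << my_rank << ": integral = " << integral << std::endl;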

  7. Profiling. Postponed

  8. Collective communication using Reduce(). The program trap.cpp uses MPI::COMM_WORLD methods Send() and Recv() to communicate from one process to another. These are examples of point-to-point communication, in which a particular source process sends a message to a particular destination process, and that destination process receives that message.
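
    In trap.cpp this point-to-point pattern looks roughly like the sketch below (variable names such as integral, my_rank, and p follow the tutorial; the message tag 0 is an assumption): every process other than 0 sends its partial sum to process 0, which receives each one and adds it in.

      if (my_rank == 0) {
        double piece;
        for (int source = 1; source < p; source++) {
          // Receive one partial sum from each of the other processes.
          MPI::COMM_WORLD.Recv(&piece, 1, MPI::DOUBLE, source, 0);
          integral += piece;
        }
      } else {
        // Send this process's partial sum to process 0.
        MPI::COMM_WORLD.Send(&integral, 1, MPI::DOUBLE, 0, 0);
      }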

    MPI also provides for collective communication, in which an entire group of processes participates in the communication. One example of collective communication is the MPI Bcast() method ("broadcast"), in which one process sends a message to all processes in an MPI communicator (such as COMM_WORLD) in a single operation.
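
    For instance, a single call such as the one below (an illustrative sketch only; trap.cpp itself does not need a broadcast) copies process 0's value of n into every other process's variable n:

      // One collective call: the root's buffer is sent to all processes in COMM_WORLD.
      MPI::COMM_WORLD.Bcast(&n, 1, MPI::INT, 0);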

    Broadcasting won't help with the communication needed in trap.cpp, but a different MPI communication method called Reduce() performs exactly the communication that trap.cpp needs. In a Reduce() operation, all processes in a communicator provide a value by calling the Reduce() method, and one designated destination process receives a combination of all those values, combined using an operation such as addition or maximum.

    Modify the program trap2.cpp to create a new version trapR.cpp that uses collective communication via Reduce() instead of point-to-point communication with Send() and Recv() in order to communicate and add up the partial sums. A single Reduce() call will replace all the code for Sending, Recving, and summing integral values. According to the manual page, a method call MPI::COMM_WORLD.Reduce() has the following arguments (in order):

    • the send buffer: the address of the value this process contributes

    • the receive buffer: the address where the combined result is stored (meaningful on the root process)

    • the count of values contributed per process (1 in our case, a single double)

    • the MPI datatype of those values (e.g., MPI::DOUBLE)

    • the reduction operation used to combine them (e.g., MPI::SUM for addition)

    • the rank of the root (destination) process

    Note: All processes in the communicator must provide all arguments for their Reduce() calls, even though the combined value is stored only in the root process's result argument. MPI makes this requirement so that the Reduce() call has the same format for all processes in a communicator. A Reduce() call can be thought of as a single call by an entire communicator, rather than individual calls by all the processes in that communicator.
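
    Putting this together, the Send()/Recv() loop sketched earlier can collapse into a single collective call along these lines (a sketch; the variable name total is an assumption):

      double total = 0.0;   // receives the grand total, but only on the root process
      // Every process contributes its value of integral; process 0 (the root)
      // receives the sum of all contributions in total.
      MPI::COMM_WORLD.Reduce(&integral, &total, 1, MPI::DOUBLE, MPI::SUM, 0);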

  9. Instance vs. class methods. Observe that we never actually create an instance of the class Trap in the programs trap.cpp and trap2.cpp. Instead, we call the class method approxIntegral using the scope operator :: (Trap::approxIntegral()). The keyword static in the definition of the method approxIntegral indicates that it is a class method.

    Note: The MPI operations Init, Send, Receive, etc., are also implemented as class methods, making it unnecessary to create an instance of the class MPI.

    As a review exercise in C++ programming, create a new version trap3.cpp of the trapezoid program in which approxIntegral is an instance method instead of a class method. This will require you to create an instance trap of the class Trap and perform the call trap.approxIntegral(...) in order to compute a trapezoidal approximation. (To change a class method to an instance method, remove the keyword static.) Compile trap3.cpp for parallel computation (with mpiCC) and test it using a Slurm job.
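
    One possible shape for this change is sketched below (a sketch only, using the same assumed integrand f as before): the static keyword disappears, and main() must construct a Trap object before calling the method.

      class Trap {
      public:
        // Now an instance method: the keyword "static" has been removed.
        double approxIntegral(double a, double b, int n) {
          double h = (b - a) / n;
          double sum = (f(a) + f(b)) / 2.0;
          for (int i = 1; i < n; i++)
            sum += f(a + i * h);
          return sum * h;
        }

      private:
        double f(double x) { return x * x; }   // example integrand (an assumption)
      };

      // In main(), an instance is now required before the call can be made:
      //   Trap trap;
      //   double integral = trap.approxIntegral(local_a, local_b, local_n);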

  10. Data decomposition (parallel programming pattern).

    A parallel programming pattern is an effective, reusable strategy for designing parallel computations. Such strategies, which have typically been discovered and honed through years of practical experience in industry and research, can apply in many different applied contexts and programming technologies. Patterns act as guides for problem solving, the essential activity in Computer Science, and they provide a way for students to gain expertise quickly in the challenging and complex world of parallel programming.

    This lab exhibits several parallel programming patterns:

    • The Data Parallel pattern appears in our approach to dividing up the basic work of the problem: to add the areas of all the trapezoids, we break up the data so that each processing element (e.g., core in a cluster node) carries out a portion of the summation, and together they span the entire summation. We will also call this the Data decomposition pattern.

    • The Message Passing pattern is represented in the point-to-point communication of MPI Send and Receive operations used in these codes.

    • The Reduce() approach to programming the problem used in trapR.cpp represents the Collective Communication pattern.

    Data Decomposition is considered to be a higher level pattern than the others. Higher level patterns are closer to the application, and lower level patterns are closer to the hardware. Observe that we referred to the applied problem of computing areas by adding up trapezoids when describing how Data Decomposition relates to our problem, but Message Passing and Collective Communication are more directly concerned with the mechanics of computer network communication and of performing arithmetic with results produced at multiple hosts.

    As the program standalone.cpp indicates, the original Trap class does not implement any of the Data decomposition of the problem -- all the subdivision of intervals in trap.cpp occurs in main(). In this step of the lab, we will move the data decomposition into the class Trap, making main() easier to write.
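
    Concretely, that subdivision in main() looks roughly like the sketch below (using the tutorial's variable names a, b, n, h, my_rank, and p, and assuming n divides evenly by p): each process computes the endpoints of its own slice of [a, b] and passes only that slice to Trap.

      double h = (b - a) / n;                       // width of every trapezoid
      int local_n = n / p;                          // trapezoids assigned to this process
      double local_a = a + my_rank * local_n * h;   // left endpoint of this process's slice
      double local_b = local_a + local_n * h;       // right endpoint of this process's slice
      double integral = Trap::approxIntegral(local_a, local_b, local_n);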

    Start by making a copy trap4.cpp of your program trap3.cpp. Modify the class Trap in trap4.cpp as follows:

    Finally, modify main() in trap4.cpp to use the new class. The MPI calls will remain in main() but the Data Decomposition involving local_a, etc., can all be deleted, leading to a more easily understood main program.
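
    One way the reorganized class and the simplified main() might fit together is sketched below (a sketch only; the constructor arguments and the integrand f are assumptions, since the exact interface is given by the modifications described above): the object remembers its process's rank and the number of processes, and approxIntegral() performs the subdivision itself.

      class Trap {
      public:
        Trap(int my_rank, int p) : my_rank(my_rank), p(p) {}

        // Approximate this process's share of the integral of f on [a, b],
        // assuming n is evenly divisible by p.
        double approxIntegral(double a, double b, int n) {
          double h = (b - a) / n;
          int local_n = n / p;
          double local_a = a + my_rank * local_n * h;
          double sum = (f(local_a) + f(local_a + local_n * h)) / 2.0;
          for (int i = 1; i < local_n; i++)
            sum += f(local_a + i * h);
          return sum * h;
        }

      private:
        int my_rank, p;
        double f(double x) { return x * x; }   // example integrand (an assumption)
      };

      // main() no longer needs local_a, local_b, or local_n of its own:
      //   Trap trap(my_rank, p);
      //   double integral = trap.approxIntegral(a, b, n);
      //   MPI::COMM_WORLD.Reduce(&integral, &total, 1, MPI::DOUBLE, MPI::SUM, 0);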

    Compile (using mpiCC) and test your program (using a Slurm job).

    Reusability of patterns.

    This same class Trap might be used with some other form of parallelization besides MPI. This illustrates another aspect of patterns like Data Decomposition: these effective problem-solving strategies can be used with many different parallel programming technologies.

Deliverables

  1. To submit your work, the first step is to set up git on the cluster head node(s) you used. (You might as well set up git on all of the cluster head nodes now.) For example, on cumulus,

    These steps need only be done once per host on which you want to submit via git.

  2. Now that you have git set up on all the computers you used, you have two options for how to submit your work on this lab.

    1. The simpler approach is to rename your subdirectories according to which computer the work is located on. For example,

      %  cd ~/PDC
      %  mv lab1 lab1-link
      
      cumulus$  cd ~/PDC
      cumulus$  mv lab1 lab1-cumulus 
      
      After these renamings, perform the following instructions on all hosts you used for the assignment:
      %  cd ~/PDC
      %  git pull origin master
      %  git add -A
      %  git commit -m "lab1 submit on link"
      %  git push origin master
      
      cumulus$  cd ~/PDC
      cumulus$  git pull origin master
      cumulus$  git add -A
      cumulus$  git commit -m "lab1 submit on cumulus"
      cumulus$  git push origin master
      
      If you use this approach, be sure it is clear which versions of files are the current ones. For example, in this lab, there may be versions of trap.cpp on both the link computers and cumulus; which one is final, or are they both the same? One way to record this is to delete old versions, or to copy them to a sub-subdirectory named old. Alternatively, you could create a text file named README.txt in each appropriate lab1 directory to indicate the status of your files.

    2. If you prefer to store all of your files in one lab1 directory, here is an alternative approach to submitting.

      We will use git's branch feature for this approach, which supports multiple lines of development in a single git project, because ______ (details to follow)

Also, fill out this form to report on your work. (Optional in 2015.)


This lab is due by Wednesday, January 7, 2015.