370CT: Distributed Programming - 2

Dr Carey Pridgeon

2016-08-06

Created: 2017-03-08 Wed 18:01

MPI Basic Functionality

Printing out hostnames

  • In OpenMP we can print the ID of the thread we are currently in.
  • MPI lets us do the same, only for compute nodes. We will start by doing this.
  • To try this out you will need to be able to compile a program to run using the MPI library.
  • Log into Nostromo, create a folder for this work, then, using wget, fetch the files listed in exercise sheet d2.
  • Once you have them, use the code in the sheet to write the program you need to complete this exercise.
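A minimal sketch of this kind of program (not the exercise-sheet code itself, which you should fetch as described above) looks like the following, using the MPI C API from C++:

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);                 // start the MPI runtime

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // this process's rank
    MPI_Comm_size(MPI_COMM_WORLD, &size);   // total number of processes

    char name[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(name, &len);     // hostname of this node

    printf("Rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```

Compile with mpic++ (or mpicc for C) and launch with mpirun, e.g. mpirun -np 4 ./hello; each process prints its own rank and hostname.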

Sending data

  • This exercise just involved generating fairly trivial data on the target node, which isn't really how we use clusters. We need to process data on the nodes.
  • To do this we need to send data to the nodes, and retrieve it.

MPI Datatypes

  • MPI can't directly manipulate the datatypes defined by specific languages. It has its own, and converts to and from them. You won't use all of them, or need to memorise them, but here they are.

MPI datatype        C++ datatype
MPI::CHAR           char
MPI::WCHAR          wchar_t
MPI::SHORT          signed short
MPI::INT            signed int

MPI Datatypes - 2

MPI::LONG           signed long
MPI::SIGNED_CHAR    signed char
MPI::UNSIGNED_CHAR  unsigned char
MPI::UNSIGNED_SHORT unsigned short
MPI::UNSIGNED       unsigned int
MPI::UNSIGNED_LONG  unsigned long int

MPI Datatypes - 3

MPI::FLOAT          float
MPI::DOUBLE         double
MPI::LONG_DOUBLE    long double
MPI::BOOL           bool
MPI::COMPLEX        Complex<float>

MPI Datatypes - 4

MPI::DOUBLE_COMPLEX      Complex<double>
MPI::LONG_DOUBLE_COMPLEX Complex<long double>
MPI::BYTE
MPI::PACKED
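As a side note, the MPI::* C++ bindings in the tables above were deprecated in later versions of the MPI standard; the equivalent C names (MPI_INT, MPI_DOUBLE, and so on) work fine from C++ and are what most code uses. A small sketch showing that these datatype handles really do describe the matching C++ types, by asking MPI how many bytes it associates with each:

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int s;
    MPI_Type_size(MPI_INT, &s);      // size MPI uses for MPI_INT
    printf("MPI_INT    : %d bytes\n", s);

    MPI_Type_size(MPI_DOUBLE, &s);   // size MPI uses for MPI_DOUBLE
    printf("MPI_DOUBLE : %d bytes\n", s);

    MPI_Finalize();
    return 0;
}
```

This runs happily on a single process (mpirun -np 1); the reported sizes should match sizeof(int) and sizeof(double) on the node.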

Structs and MPI

  • MPI does not do objects at all, nor can it handle structs directly, but it can transform a struct into its own equivalent packed multi-type container.
  • But that's getting a bit ahead of ourselves. We will cover that in detail next week.
  • For now let's step back and focus on sending a simple int from one node to another.
  • After we've done that we'll cover converting to and from C++ datatypes and using them.

Ranks

  • A computer (or compute node)'s position in an MPI hierarchy is its rank, an integer from 0 to n-1, where n is the number of processes.
  • We can use this rank in our program logic, as the exercise we are about to follow will demonstrate. The first exercise did too, but not as explicitly.
  • Specifically, we can write code that will only be executed on a particular node.
  • Note that this is different from OpenMP, where we had no ability to state which code ran in which thread.
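A minimal sketch of rank-based branching (assumptions: two or more processes launched with mpirun, C API used from C++):

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Every process runs the same program, but only the process
    // whose rank matches takes a given branch.
    if (rank == 0) {
        printf("Only rank 0 executes this line\n");
    } else {
        printf("Rank %d executes this line instead\n", rank);
    }

    MPI_Finalize();
    return 0;
}
```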

Send and Receive

  • For our first exercise where we manipulate data across the cluster, follow the directions in exercise sheet d3.
  • The data are sent from one node and received by another, with the sending and receiving code only executing if we are on the correct node.
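The pattern the exercise uses can be sketched like this (not the d3 code itself; assumes at least two processes, and sends a single int from rank 0 to rank 1):

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int payload;
    if (rank == 0) {
        payload = 99;  // data only exists on rank 0 at this point
        MPI_Send(&payload, 1, MPI_INT, /*dest=*/1, /*tag=*/0,
                 MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&payload, 1, MPI_INT, /*source=*/0, /*tag=*/0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d from rank 0\n", payload);
    }

    MPI_Finalize();
    return 0;
}
```

Note how both the send and the receive are guarded by rank checks: the same program runs on every node, and the rank decides which side of the exchange each node performs.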