370CT: Distributed Programming - 3

Dr Carey Pridgeon


Created: 2017-03-13 Mon 10:54

MPI Datatypes

MPI does not do Pointers

  • Pointers are machine specific; they are always relative to some location on a specific machine.
  • Move to another machine, and the address becomes much harder to interpret.
  • Not impossible, perhaps, but pointlessly time-wasting, so MPI doesn't do it.

MPI does not do Objects

  • Objects are already fairly complicated containers, so rather hard for any language or library to serialise and transmit.
  • MPI cannot understand objects at all. You could not, for instance, send a std::string from one node to another using MPI, although you could send the contents of a std::string.

MPI can't do Structs either

  • The contents of a struct are always positioned relative to the beginning of that struct.
  • The beginning position of the struct is unknown to the destination computer, and MPI has no means to transmit this information, since it would only be a relative position, and thus meaningless.

It's all relative really

  • These are mostly manifestations of the same fundamental problem: relative position in physical memory.
  • Processes are allocated a virtual memory block, whose addresses mean nothing outside that block.
  • MPI sidesteps this problem by transforming the data structures we are used to into blocks consisting of the data, its position relative to other pieces of data, its type, and other housekeeping information required to convert it to and from the data structures our program code on the nodes uses.

Random scribbling on the board to attempt to demonstrate my point

  • why are you reading the slide, look at the board…..

MPI Datatypes

MPI composite type creation - 1

  • Starting with a struct like this
struct something {
  int awholenumber;
  int anotherwholenumber;
  int yetanotherwholenumber;
  bool atrueornotthing;
  float afloatynumber;
  float asecondfloatynumber;
  double alongerfloatynumber;
  double asecondlongerfloatynumber;
  char awholelistofletters[10];
};

MPI composite type creation - 2

  • You need to extract the structural information needed to read information from, and write information to, datatypes in C++ that match the one you're using.
  • So we need to know what types there are (expressed as their MPI equivalents), what order they can be found in (this is very important), and how many of each to expect.
const int count = 5;  
MPI::Datatype typesInStruct[count] = {MPI::INT, MPI::BOOL, MPI::FLOAT, MPI::DOUBLE, MPI::CHAR};
int arrayBlockLengths [count] = {3,1,2,2,10};

MPI composite type creation - 3

  • Finally we need memory locations relative to some position MPI can understand: in this case, the start of the struct.
  • This might not always be 0, but it only needs to be a relative position; MPI can work out the rest to read or fill in the struct correctly, because it will only ever be doing it one variable or struct at a time.

MPI composite type creation - 4

// Specify starting memory of each block.
// Note: this slide switches to a simpler two-member hostStruct
// (hostName and id) to keep the displacement code short.
MPI::Aint objAddress, address1, address2;
MPI::Aint arrayDisplacements[count];
hostStruct sbuf; // Just has to be a struct instance, but not
                 // the one you're actually sending
objAddress = MPI::Get_address(&sbuf);
address1 = MPI::Get_address(&sbuf.hostName);
address2 = MPI::Get_address(&sbuf.id);
arrayDisplacements[0] = address1 - objAddress;
arrayDisplacements[1] = address2 - objAddress;

MPI composite type creation - 5

  • Now that we have all the pieces, we can create our MPI datatype to transport our data over the MPI network.

    MPI::Datatype mpiHostStruct;
    mpiHostStruct = MPI::Datatype::Create_struct(count, arrayBlockLengths,
                                                 arrayDisplacements, typesInStruct);
    // The new type must be committed before it can be used in communication.
    mpiHostStruct.Commit();