370CT: Distributed Programming - 4

Dr Carey Pridgeon

2016-08-06

Created: 2017-03-15 Wed 14:34

Reduction and Sub Communicators

MPI_COMM_WORLD

  • Comm World handles communication between all the nodes (single computers, or individual processing units if you are using multiple cores per machine). So each node can talk to any other node, but only via Comm World.
  • When we send a message, we send it to Comm World.
  • When we receive it, we receive it not from the source node directly, but from Comm World, from the buffer it reserved for the originating node. A minimal send/receive sketch follows.
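  • A minimal sketch of passing one value between two nodes through the world communicator (not the exercise code; the ranks and the tag value are arbitrary choices for illustration):

#include <mpi.h>
#include <stdio.h>

/* Run with at least two processes, e.g. mpirun -np 2 ./a.out */
int main(int argc, char *argv[]) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        float value = 42.0f;
        /* Send to rank 1 via the world communicator. */
        MPI_Send(&value, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        float value;
        MPI_Status status;
        /* Receive via the world communicator; the buffer is filled
           by Comm World rather than read directly from rank 0. */
        MPI_Recv(&value, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &status);
        printf("Rank 1 received %f\n", value);
    }

    MPI_Finalize();
    return 0;
}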

Reduction - 1

  • Reduction works much like it does in OpenMP, but with a few significant differences.
  • That is to say, the result is the same, but the means by which the result is achieved is substantially different, and would be even if we used MPI only on a single machine, entirely in parallel mode.

Reduction - 2

  • We do need to talk about the reduce function parameters a little
int MPI_Reduce(const void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype,
               MPI_Op op, int root, MPI_Comm comm)
  • This is from the code I've given you
MPI_Reduce(&my_sum,&sum,1,MPI_FLOAT,MPI_SUM,0,MPI_COMM_WORLD);
  • In that call: sendbuf is each node's local value (&my_sum), recvbuf is where the combined result is placed on the root node (&sum), count and datatype describe the data being reduced (one MPI_FLOAT), op is the reduction operation (MPI_SUM), root is the rank that collects the result (0), and comm is the communicator (MPI_COMM_WORLD).
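  • A runnable sketch built around that call; assume for illustration that each rank's my_sum is simply its own rank number (the exercise code computes its own local value):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each node computes a local partial result; here we just use the rank. */
    float my_sum = (float)rank;
    float sum = 0.0f;

    /* Combine every node's my_sum into sum on rank 0 (the root). */
    MPI_Reduce(&my_sum, &sum, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Total across all ranks: %f\n", sum);

    MPI_Finalize();
    return 0;
}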

Reduction - 3

  • [MPI_MAX] maximum
  • [MPI_MIN] minimum
  • [MPI_SUM] sum
  • [MPI_PROD] product

Reduction - 4

  • [MPI_LAND] logical and
  • [MPI_BAND] bit-wise and
  • [MPI_LOR] logical or
  • [MPI_BOR] bit-wise or

Reduction - 5

  • [MPI_LXOR] logical xor
  • [MPI_BXOR] bit-wise xor
  • [MPI_MAXLOC] max value and location
  • [MPI_MINLOC] min value and location
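  • MPI_MAXLOC and MPI_MINLOC reduce a value/index pair, so they need one of MPI's paired datatypes such as MPI_FLOAT_INT. A minimal sketch, with per-node values made up purely for illustration:

#include <mpi.h>
#include <stdio.h>

/* MPI_FLOAT_INT expects a float value paired with an int, typically the owning rank. */
struct float_int { float value; int rank; };

int main(int argc, char *argv[]) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    struct float_int local, global;
    local.value = (float)(rank * rank);  /* some per-node value */
    local.rank  = rank;                  /* remember which node it came from */

    /* Find the largest value and the rank that held it. */
    MPI_Reduce(&local, &global, 1, MPI_FLOAT_INT, MPI_MAXLOC, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Max value %f was on rank %d\n", global.value, global.rank);

    MPI_Finalize();
    return 0;
}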

Subdividing the World Communicator

SubGroups - 1

  • We give MPI its total list of available nodes on startup; thereafter we can split them up.
  • I have no exercises to demonstrate this; transferring data and your assignment are probably already enough at this point.
  • Why do it?
  • Sometimes you might have a lot of nodes in your cluster, and more than one problem to solve at once, so you create sub groups and assign tasks to them.
  • Each sub group has its own master node.

SubGroups - 2

Figure: comm_split.png, from http://mpitutorial.com/tutorials/introduction-to-groups-and-communicators/

SubGroups - 3

  • This is why all MPI functions have a communicator parameter: you can have more than one communicator.
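  • A minimal sketch of splitting Comm World into two sub communicators with MPI_Comm_split; splitting into lower and upper halves is just an illustrative choice:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int world_rank, world_size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Ranks with the same colour end up in the same sub communicator;
       here the lower half of the ranks form one group, the upper half another. */
    int colour = (world_rank < world_size / 2) ? 0 : 1;
    MPI_Comm sub_comm;
    MPI_Comm_split(MPI_COMM_WORLD, colour, world_rank, &sub_comm);

    int sub_rank;
    MPI_Comm_rank(sub_comm, &sub_rank);
    /* Rank 0 within each sub communicator acts as that group's master node. */
    printf("World rank %d is rank %d in group %d\n", world_rank, sub_rank, colour);

    MPI_Comm_free(&sub_comm);
    MPI_Finalize();
    return 0;
}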