In this mpi4py tutorial, we're going to cover the gather command with MPI. The idea of gather is basically the opposite of scatter: instead of the master node handing pieces of data out, every node sends its piece in, and the root (master) node collects them all into a single list.
We'll use an almost identical script to the previous one, with a few small changes. Let's say we scatter a bunch of data to the nodes, those nodes perform an operation on that data, and then we want the master node to collect the results. Here's how we'd do it:
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

if rank == 0:
    data = [(x+1)**x for x in range(size)]
    print('we will be scattering:', data)
else:
    data = None

# each node receives one element of the list
data = comm.scatter(data, root=0)
data += 1
print('rank', rank, 'has data:', data)

# the root collects one result from every node, in rank order
newData = comm.gather(data, root=0)

if rank == 0:
    print('master:', newData)
Here, all of the nodes modify the data variable. This data += 1 is our really intense operation that we want the nodes to perform in parallel! Next, we add the gather command.
Gather works by specifying what each node is sending, and the rank where the collected list should go (root), which we're saying is processor 0. Note that comm.gather returns the full list only on the root; every other rank gets None back.
This will show that the data was dispersed, the operation was performed, and the new data was correctly gathered back up.
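If you just want to sanity-check the arithmetic before firing up a cluster, the whole scatter, operate, gather round trip can be mimicked in plain Python. This is a single-process sketch of what the 5 ranks collectively compute, not actual mpi4py code:

```python
# Single-process sketch of the scatter -> operate -> gather round trip.
# No MPI here; we just mimic what each of the 5 ranks would do.
size = 5  # matches -np 5 in the mpirun command below

# what rank 0 builds and scatters, one element per rank
data = [(x + 1) ** x for x in range(size)]
print('we will be scattering:', data)   # [1, 2, 9, 64, 625]

# each "rank" receives one element and performs the intense operation
results = [element + 1 for element in data]

# the root gathers one result per rank, in rank order
print('master:', results)               # [2, 3, 10, 65, 626]
```

With real MPI the per-rank print statements can interleave in any order, but the scattered list and the gathered result will match the values above.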
mpirun.openmpi -np 5 -machinefile /home/pi/mpi_testing/machinefile python ~/Desktop/sct/sct10.py