Now we're going to talk about dynamically sending and receiving messages. Maybe you are running on a heavily distributed network, where the number of available nodes changes often. Maybe you simply have a large number of nodes and don't want to hand-code the messaging for each individual one. Either way, you'll need an algorithm for choosing destination nodes that scales with your network.

Here's an example algorithm that always sends to the next node up, and wraps back around to the beginning when we reach the largest node number:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.rank
size = comm.size
name = MPI.Get_processor_name()

# Each process computes its own value based on its rank.
shared = (rank + 1) * 5

# Send to the next rank up, wrapping around to rank 0 after the last one,
# and receive from the previous rank, wrapping around for rank 0.
comm.send(shared, dest=(rank + 1) % size)
data = comm.recv(source=(rank - 1) % size)

print(name)
print('Rank:', rank)
print('Received:', data, 'which came from rank:', (rank - 1) % size)
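Because the destination and source are computed with the modulus operator against the total process count, the same script works no matter how many nodes you run it on. If you want to convince yourself of the wrap-around behaviour without touching the cluster, here is a small sketch, plain Python with no MPI, using an assumed process count of 5 to match the run command below, that prints the destination and source each rank would use:

# Sanity check of the ring arithmetic used above -- no MPI required.
# The process count of 5 is an assumption matching -np 5 below.
size = 5
for rank in range(size):
    dest = (rank + 1) % size      # next rank up, wrapping to 0 after the last one
    source = (rank - 1) % size    # previous rank, wrapping to size-1 for rank 0
    print('Rank', rank, 'sends to', dest, 'and receives from', source)

The script itself is then launched across the cluster with mpirun: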
mpirun.openmpi -np 5 -machinefile /home/pi/mpi_testing/machinefile python ~/Desktop/sct/sct6.py
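Because the script reads the node count from comm.size at run time rather than hard-coding it, the same command should work unchanged if you raise or lower the -np value or add machines to the machinefile.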