MPI_GRAPH_CREATE(3) Open MPI MPI_GRAPH_CREATE(3)

MPI_Graph_create — Makes a new communicator to which topology information has been attached.

SYNTAX

C Syntax

#include <mpi.h>
int MPI_Graph_create(MPI_Comm comm_old, int nnodes, const int index[],
                     const int edges[], int reorder, MPI_Comm *comm_graph)


Fortran Syntax

USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_GRAPH_CREATE(COMM_OLD, NNODES, INDEX, EDGES, REORDER,
                 COMM_GRAPH, IERROR)
INTEGER COMM_OLD, NNODES, INDEX(*), EDGES(*)
INTEGER COMM_GRAPH, IERROR
LOGICAL REORDER


Fortran 2008 Syntax

USE mpi_f08
MPI_Graph_create(comm_old, nnodes, index, edges, reorder, comm_graph,
                 ierror)
TYPE(MPI_Comm), INTENT(IN) :: comm_old
INTEGER, INTENT(IN) :: nnodes, index(nnodes), edges(*)
LOGICAL, INTENT(IN) :: reorder
TYPE(MPI_Comm), INTENT(OUT) :: comm_graph
INTEGER, OPTIONAL, INTENT(OUT) :: ierror


INPUT PARAMETERS

  • comm_old : Input communicator without topology (handle).
  • nnodes : Number of nodes in graph (integer).
  • index : Array of integers describing node degrees (see below).
  • edges : Array of integers describing graph edges (see below).
  • reorder : Ranking may be reordered (true) or not (false) (logical).

OUTPUT PARAMETERS

  • comm_graph : Communicator with graph topology added (handle).
  • ierror : Fortran only: Error status (integer).

DESCRIPTION

MPI_Graph_create returns a handle to a new communicator to which the graph topology information is attached. If reorder = false then the rank of each process in the new group is identical to its rank in the old group. Otherwise, the function may reorder the processes. If the size, nnodes, of the graph is smaller than the size of the group of comm_old, then some processes are returned MPI_COMM_NULL, in analogy to MPI_Cart_create and MPI_Comm_split. The call is erroneous if it specifies a graph that is larger than the group size of the input communicator.

The three parameters nnodes, index, and edges define the graph structure. nnodes is the number of nodes of the graph. The nodes are numbered from 0 to nnodes-1. The ith entry of array index stores the total number of neighbors of the first i graph nodes. The lists of neighbors of nodes 0, 1, …, nnodes-1 are stored in consecutive locations in array edges. The array edges is a flattened representation of the edge lists. The total number of entries in index is nnodes and the total number of entries in edges is equal to the number of graph edges.

The definitions of the arguments nnodes, index, and edges are illustrated with the following simple example.

Example: Assume there are four processes 0, 1, 2, 3 with the following adjacency matrix:

Process   Neighbors
0         1, 3
1         0
2         3
3         0, 2

Then, the input arguments are:

  • nnodes = 4
  • index = 2, 3, 4, 6
  • edges = 1, 3, 0, 3, 0, 2

Thus, in C, index[0] is the degree of node zero, and index[i] - index[i-1] is the degree of node i, i=1, … , nnodes-1; the list of neighbors of node zero is stored in edges[j], for 0 <= j <= index[0] - 1 and the list of neighbors of node i, i > 0 , is stored in edges[j], index[i-1] <= j <= index[i] - 1.
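For instance, this relation between index and node degrees can be written as a small C helper; the function name node_degree is illustrative only:

#include <assert.h>

/* Degree of node i, derived from the cumulative counts in index[].
 * index[0] is the degree of node 0; for i > 0 the degree is the
 * difference of consecutive entries. */
static int node_degree(const int index[], int i)
{
    assert(i >= 0);
    return (i == 0) ? index[0] : index[i] - index[i - 1];
}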

In Fortran, index(1) is the degree of node zero, and index(i+1) - index(i) is the degree of node i, i=1, … , nnodes-1; the list of neighbors of node zero is stored in edges(j), for 1 <= j <= index(1) and the list of neighbors of node i, i > 0, is stored in edges(j), index(i) + 1 <= j <= index(i + 1).
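Putting the example together, a minimal C sketch that builds this graph might look as follows; it assumes the job is launched with at least four processes and leaves reorder false:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    /* The four-node graph from the example above. */
    int nnodes  = 4;
    int index[] = { 2, 3, 4, 6 };        /* cumulative degree counts */
    int edges[] = { 1, 3, 0, 3, 0, 2 };  /* flattened neighbor lists */
    MPI_Comm comm_graph;
    int size;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* The call is erroneous if nnodes exceeds the group size. */
    if (size >= nnodes) {
        MPI_Graph_create(MPI_COMM_WORLD, nnodes, index, edges,
                         0 /* reorder = false */, &comm_graph);

        /* Ranks beyond nnodes - 1 receive MPI_COMM_NULL. */
        if (comm_graph != MPI_COMM_NULL) {
            int rank;
            MPI_Comm_rank(comm_graph, &rank);
            printf("rank %d joined the graph communicator\n", rank);
            MPI_Comm_free(&comm_graph);
        }
    }

    MPI_Finalize();
    return 0;
}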

ERRORS

Almost all MPI routines return an error value; C routines return it as the value of the function and Fortran routines return it in the last argument.

Before the error value is returned, the current MPI error handler associated with the communication object (e.g., communicator, window, file) is called. If no communication object is associated with the MPI call, then the call is considered attached to MPI_COMM_SELF and will call the associated MPI error handler.

When MPI_COMM_SELF is not initialized (i.e., before MPI_Init/MPI_Init_thread, after MPI_Finalize, or when using the Sessions Model exclusively), the error raises the initial error handler. The initial error handler can be changed by calling MPI_Comm_set_errhandler on MPI_COMM_SELF when using the World Model, by passing the mpi_initial_errhandler CLI argument to mpiexec, or by setting the mpi_initial_errhandler info key on MPI_Comm_spawn/MPI_Comm_spawn_multiple.

If no other appropriate error handler has been set, then the MPI_ERRORS_RETURN error handler is called for MPI I/O functions and the MPI_ERRORS_ABORT error handler is called for all other MPI functions.

Open MPI includes three predefined error handlers that can be used:

  • MPI_ERRORS_ARE_FATAL Causes the program to abort all connected MPI processes.
  • MPI_ERRORS_ABORT An error handler that can be invoked on a communicator, window, file, or session. When called on a communicator, it acts as if MPI_Abort was called on that communicator. When called on a window or file, it acts as if MPI_Abort was called on a communicator containing the group of processes in the corresponding window or file. When called on a session, it aborts only the local process.
  • MPI_ERRORS_RETURN Returns an error code to the application.

MPI applications can also implement their own error handlers by calling:

  • MPI_Comm_create_errhandler then MPI_Comm_set_errhandler
  • MPI_File_create_errhandler then MPI_File_set_errhandler
  • MPI_Session_create_errhandler then MPI_Session_set_errhandler or at MPI_Session_init
  • MPI_Win_create_errhandler then MPI_Win_set_errhandler
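As an illustration, a minimal sketch of the first pattern (MPI_Comm_create_errhandler followed by MPI_Comm_set_errhandler) might look like this; the handler name warn_errhandler is hypothetical:

#include <mpi.h>
#include <stdio.h>

/* Custom handler: print the error string, then return control to
 * the caller rather than aborting. */
static void warn_errhandler(MPI_Comm *comm, int *errcode, ...)
{
    char msg[MPI_MAX_ERROR_STRING];
    int len;
    MPI_Error_string(*errcode, msg, &len);
    fprintf(stderr, "MPI error: %s\n", msg);
}

int main(int argc, char *argv[])
{
    MPI_Errhandler eh;

    MPI_Init(&argc, &argv);

    MPI_Comm_create_errhandler(warn_errhandler, &eh);
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, eh);
    MPI_Errhandler_free(&eh);  /* the handler remains attached to
                                  the communicator after the free */

    /* ... subsequent MPI calls on MPI_COMM_WORLD now invoke
       warn_errhandler on error and then return an error code ... */

    MPI_Finalize();
    return 0;
}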

Note that MPI does not guarantee that an MPI program can continue past an error.

See the MPI man page for a full list of MPI error codes.

See the Error Handling section of the MPI-3.1 standard for more information.

SEE ALSO:

MPI_Graph_get


COPYRIGHT

2003-2024, The Open MPI Community

April 11, 2024