MPI_INIT_THREAD(3)                     Open MPI                     MPI_INIT_THREAD(3)
MPI_Init_thread — Initializes the MPI execution environment
SYNTAX
C Syntax
#include <mpi.h>

int MPI_Init_thread(int *argc, char ***argv,
    int required, int *provided)
Fortran Syntax
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_INIT_THREAD(REQUIRED, PROVIDED, IERROR)
    INTEGER REQUIRED, PROVIDED, IERROR
Fortran 2008 Syntax
USE mpi_f08
MPI_Init_thread(required, provided, ierror)
    INTEGER, INTENT(IN) :: required
    INTEGER, INTENT(OUT) :: provided
    INTEGER, OPTIONAL, INTENT(OUT) :: ierror
INPUT PARAMETERS
- argc: C only: Pointer to the number of arguments.
- argv: C only: Argument vector.
- required: Desired level of thread support (integer).
OUTPUT PARAMETERS
- provided: Available level of thread support (integer).
- ierror: Fortran only: Error status (integer).
DESCRIPTION
This routine, or MPI_Init, must be called before most other MPI routines are called. There are a small number of exceptions, such as MPI_Initialized and MPI_Finalized. MPI can be initialized at most once; subsequent calls to MPI_Init or MPI_Init_thread are erroneous.
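As an illustration (not part of this man page), a library that may be hosted by an application that has already initialized MPI can use MPI_Initialized to avoid a second, erroneous initialization; the helper name ensure_mpi_initialized below is hypothetical:

#include <mpi.h>

/* Hypothetical helper: initialize MPI only if the hosting
   application has not already done so. */
static void ensure_mpi_initialized(int *argc, char ***argv)
{
    int initialized = 0;
    int provided;

    MPI_Initialized(&initialized);
    if (!initialized) {
        /* Request serialized support here; adjust 'required' and
           check 'provided' according to the library's actual needs. */
        MPI_Init_thread(argc, argv, MPI_THREAD_SERIALIZED, &provided);
    }
}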
MPI_Init_thread, as compared to MPI_Init, has a provision to request a certain level of thread support in required:
- MPI_THREAD_SINGLE
- Only one thread will execute.
- MPI_THREAD_FUNNELED
- If the process is multithreaded, only the thread that called MPI_Init_thread will make MPI calls.
- MPI_THREAD_SERIALIZED
- If the process is multithreaded, only one thread will make MPI library calls at one time.
- MPI_THREAD_MULTIPLE
- If the process is multithreaded, multiple threads may call MPI at once with no restrictions.
The level of thread support available to the program is set in provided. In Open MPI, the value is dependent on how the library was configured and built. Note that there is no guarantee that provided will be greater than or equal to required.
Also note that calling MPI_Init_thread with a required value of MPI_THREAD_SINGLE is equivalent to calling MPI_Init.
All MPI programs must contain a call to MPI_Init or MPI_Init_thread. Open MPI accepts the C argc and argv arguments to main, but neither modifies, interprets, nor distributes them:
/* declare variables */
MPI_Init_thread(&argc, &argv, req, &prov);
/* parse arguments */
/* main program */
MPI_Finalize();
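A more complete sketch of that skeleton, assuming the program wants MPI_THREAD_MULTIPLE and treats anything less as fatal (this policy is illustrative, not mandated by Open MPI):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int provided;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    /* The granted level may be lower than the requested one; the
       thread-level constants are ordered, so '<' compares levels. */
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available (provided=%d)\n",
                provided);
        MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
    }

    /* parse arguments */
    /* main program */

    MPI_Finalize();
    return EXIT_SUCCESS;
}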
NOTES
The Fortran versions do not have provisions for argc and argv; they take only the REQUIRED, PROVIDED, and IERROR arguments.
It is the caller’s responsibility to check the value of provided, as it may be less than what was requested in required.
The MPI Standard does not specify what a program may do before calling MPI_Init_thread or after calling MPI_Finalize. In the Open MPI implementation, the program should do as little as possible. In particular, avoid anything that changes the external state of the program, such as opening files, reading standard input, or writing to standard output.
MPI_THREAD_MULTIPLE Support
MPI_THREAD_MULTIPLE support is included if the environment in which Open MPI was built supports threading. You can check the output of ompi_info(1) to see if Open MPI has MPI_THREAD_MULTIPLE support:
shell$ ompi_info | grep "Thread support"
          Thread support: posix (MPI_THREAD_MULTIPLE: yes, OPAL support: yes, OMPI progress: no, Event lib: yes)
shell$
The MPI_THREAD_MULTIPLE: yes portion of the above output indicates that Open MPI was compiled with MPI_THREAD_MULTIPLE support.
Note that there is a small performance penalty for using MPI_THREAD_MULTIPLE support; latencies for short messages will be higher as compared to when using MPI_THREAD_SINGLE, for example.
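In addition to the build-time check above, the level of thread support actually in effect can be confirmed at run time with the standard calls MPI_Query_thread and MPI_Is_thread_main; a minimal sketch (the function report_thread_support is hypothetical):

#include <mpi.h>
#include <stdio.h>

/* Call after MPI_Init_thread: report the thread level currently in
   effect and whether the calling thread is the one that initialized MPI. */
static void report_thread_support(void)
{
    int provided = 0;
    int is_main  = 0;

    MPI_Query_thread(&provided);   /* same value MPI_Init_thread returned */
    MPI_Is_thread_main(&is_main);  /* nonzero on the initializing thread */

    printf("thread support level: %d, main thread: %s\n",
           provided, is_main ? "yes" : "no");
}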
ERRORS
Almost all MPI routines return an error value; C routines return it as the value of the function, and Fortran routines return it in the last argument.
Before the error value is returned, the current MPI error handler associated with the communication object (e.g., communicator, window, file) is called. If no communication object is associated with the MPI call, then the call is considered attached to MPI_COMM_SELF and will call the associated MPI error handler.
When MPI_COMM_SELF is not initialized (i.e., before MPI_Init/MPI_Init_thread, after MPI_Finalize, or when using the Sessions Model exclusively), the error raises the initial error handler. The initial error handler can be changed by calling MPI_Comm_set_errhandler on MPI_COMM_SELF when using the World Model, or with the mpi_initial_errhandler CLI argument to mpiexec or the corresponding info key to MPI_Comm_spawn/MPI_Comm_spawn_multiple.
If no other appropriate error handler has been set, then the MPI_ERRORS_RETURN error handler is called for MPI I/O functions and the MPI_ERRORS_ABORT error handler is called for all other MPI functions.
Open MPI includes three predefined error handlers that can be used:
- MPI_ERRORS_ARE_FATAL Causes the program to abort all connected MPI processes.
- MPI_ERRORS_ABORT An error handler that can be invoked on a communicator, window, file, or session. When called on a communicator, it acts as if MPI_Abort were called on that communicator. If called on a window or file, it acts as if MPI_Abort were called on a communicator containing the group of processes in the corresponding window or file. If called on a session, it aborts only the local process.
- MPI_ERRORS_RETURN Returns an error code to the application.
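For example (an illustrative sketch, not prescribed by this man page), an application can select MPI_ERRORS_RETURN on a communicator and inspect return codes itself:

#include <mpi.h>
#include <stdio.h>

/* Sketch: make errors on MPI_COMM_WORLD return codes instead of aborting. */
void check_return_codes(void)
{
    char msg[MPI_MAX_ERROR_STRING];
    int rc, len;

    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    rc = MPI_Barrier(MPI_COMM_WORLD);    /* any call on that communicator */
    if (rc != MPI_SUCCESS) {
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "MPI_Barrier failed: %s\n", msg);
    }
}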
MPI applications can also implement their own error handlers by calling:
- MPI_Comm_create_errhandler then MPI_Comm_set_errhandler
- MPI_File_create_errhandler then MPI_File_set_errhandler
- MPI_Session_create_errhandler then MPI_Session_set_errhandler or at MPI_Session_init
- MPI_Win_create_errhandler then MPI_Win_set_errhandler
Note that MPI does not guarantee that an MPI program can continue past an error.
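A hedged sketch of the first pair of calls listed above, using a hypothetical handler warn_on_error that prints the error string and returns (keeping in mind that MPI does not guarantee the program can continue past the error):

#include <mpi.h>
#include <stdio.h>

/* Hypothetical handler matching MPI_Comm_errhandler_function. */
static void warn_on_error(MPI_Comm *comm, int *code, ...)
{
    char msg[MPI_MAX_ERROR_STRING];
    int len;

    MPI_Error_string(*code, msg, &len);
    fprintf(stderr, "MPI error on communicator: %s\n", msg);
}

void install_handler(void)
{
    MPI_Errhandler eh;

    MPI_Comm_create_errhandler(warn_on_error, &eh);
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, eh);
    MPI_Errhandler_free(&eh);  /* the communicator keeps its own reference */
}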
See the MPI man page for a full list of MPI error codes.
See the Error Handling section of the MPI-3.1 standard for more information.
SEE ALSO:
- MPI_Init
- MPI_Initialized
- MPI_Finalize
- MPI_Finalized
COPYRIGHT
2003-2024, The Open MPI Community
October 16, 2024