
UNIWA

UNIVERSITY OF WEST ATTICA
SCHOOL OF ENGINEERING
DEPARTMENT OF COMPUTER ENGINEERING AND INFORMATICS


Introduction to Parallel Computing

Message Passing Interface (MPI)

Vasileios Evangelos Athanasiou
Student ID: 19390005

GitHub · LinkedIn


Supervision

Supervisor: Vasileios Mamalis, Professor

UNIWA Profile

Supervisor: Grammati Pantziou, Professor

UNIWA Profile · LinkedIn

Co-supervisor: Michalis Iordanakis, Academic Scholar

UNIWA Profile · Scholar


Athens, December 2022



Message Passing Interface (MPI)

This project implements a parallel system that checks whether a sequence of integers is sorted in ascending order. It uses point-to-point communication between multiple processes, without relying on collective MPI routines.

The main objective is to distribute a sequence of size N evenly among p processes, determine whether the sequence is sorted, and, if it is not, report the first element that violates the ascending order.


Table of Contents

| Section | Folder / File | Description |
|---------|---------------|-------------|
| 1 | assign/ | Assignment material for the MPI (Message Passing Interface) laboratory |
| 1.1 | assign/PAR-LAB-EXER-I-2022-2023.pdf | Laboratory exercise description (English) |
| 1.2 | assign/ΠΑΡ-ΕΡΓ-ΑΣΚ-Ι-2022-2023.pdf | Laboratory exercise description (Greek) |
| 2 | docs/ | Documentation and theoretical background on MPI |
| 2.1 | docs/Message-Passing-Interface.pdf | MPI theory and concepts (English) |
| 2.2 | docs/Διασύνδεση-Περασμάτων-Μηνυμάτων.pdf | MPI theory and concepts (Greek) |
| 3 | src/ | Source code for the MPI programs |
| 3.1 | src/mpi.c | C implementation using the MPI programming model |
| 4 | README.md | Project documentation |
| 5 | INSTALL.md | Usage instructions |

1. Features

  • Parallel Processing:
    Distributes the computational workload across multiple processors, allowing each process to check a sub-sequence concurrently.

  • Point-to-Point Communication:
    Uses MPI_Send and MPI_Recv to manually exchange data between processes, especially for boundary element verification.

  • Manager–Worker Model:

    • Process P₀ acts as the manager:
      • Reads user input
      • Distributes data
      • Collects results and prints the final output
    • Remaining processes act as workers.
  • Interactive Menu:
    Allows users to:

    • Run multiple sequence checks in succession
    • Exit the program gracefully through a menu interface

2. Technical Implementation

The program is written in C using the MPI (Message Passing Interface) standard.

2.1 Core MPI Routines Used

  • MPI_Init
    Initializes the MPI execution environment.

  • MPI_Comm_rank & MPI_Comm_size
    Determine the process ID and the total number of processes.

  • MPI_Send & MPI_Recv
    Enable blocking point-to-point communication for:

    • Data distribution
    • Boundary element comparison
  • MPI_Finalize
    Terminates the MPI environment cleanly.


3. Boundary Element Logic

To ensure the entire sequence is globally sorted:

  • Each process Pᵢ (except the last one) sends its last element to process Pᵢ₊₁.
  • Process Pᵢ₊₁ compares:
    • Its first element
    • With the last element of Pᵢ

This mechanism closes the gap created by simple parallel partitioning and ensures correct global ordering verification.


4. Conclusion

This project demonstrates how MPI point-to-point communication can be used to solve a classic sequence validation problem in parallel. It highlights key concepts such as data distribution, process synchronization, and boundary condition handling, making it a solid example of practical parallel programming with MPI.