An Efficient I/O Aggregator Assignment Scheme for Collective I/O Considering Processor Affinity

As the number of processes in parallel applications increases, parallel I/O becomes increasingly important. Collective I/O is a specialized form of parallel I/O that lets many processes perform I/O on a single shared file. In popular Message Passing Interface (MPI) libraries, collective I/O follows a two-phase I/O scheme in which designated processes, called I/O aggregators, play a central role by carrying out both the communication and the I/O operations. Although much prior work has sought to improve collective I/O performance, studies of I/O aggregator assignment that take multi-core architecture into account are hard to find. Nowadays, many HPC systems use multi-core systems as compute nodes, so understanding the characteristics of multi-core architecture, such as processor affinity, is important for increasing the performance of parallel applications. In this paper, we show that the communication cost of collective I/O differs according to the placement of the I/O aggregators when the compute nodes are multi-core systems and each node hosts multiple I/O aggregators. We also propose a modified collective I/O scheme that reduces the communication cost of collective I/O through proper placement of the I/O aggregators. We evaluated the proposed scheme on a Linux cluster system, and the results demonstrated performance improvements ranging from 7.08% to 90.46% for read operations and from 20.67% to 90.18% for write operations.
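For context, the sketch below shows how an MPI application typically invokes two-phase collective I/O and the standard user-level knobs that influence aggregator usage. The hints "romio_cb_write" and "cb_nodes" are real ROMIO hints for collective buffering; the file name, element count, and hint values are illustrative assumptions, and the affinity-aware aggregator placement proposed in the paper would be implemented inside the MPI-IO library itself rather than through these hints.

/* Minimal sketch: a collective write through MPI-IO. Each rank writes a
 * contiguous block of a single shared file, which ROMIO services with its
 * two-phase scheme via I/O aggregators. Hint values here are examples only. */
#include <mpi.h>
#include <stdlib.h>

#define COUNT 1024  /* elements written per process (illustrative) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *buf = malloc(COUNT * sizeof(int));
    for (int i = 0; i < COUNT; i++)
        buf[i] = rank;

    /* ROMIO hints steering two-phase collective buffering. */
    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "romio_cb_write", "enable"); /* force two-phase collective writes */
    MPI_Info_set(info, "cb_nodes", "4");            /* number of I/O aggregators (example) */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

    /* Rank-ordered, contiguous layout in the shared file. */
    MPI_Offset offset = (MPI_Offset)rank * COUNT * sizeof(int);
    MPI_File_write_at_all(fh, offset, buf, COUNT, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Info_free(&info);
    free(buf);
    MPI_Finalize();
    return 0;
}

In the two-phase scheme, the data from all ranks is first exchanged so that each aggregator holds a contiguous file region, and the aggregators then issue the actual file writes; the paper's contribution concerns which processes, relative to cores and sockets, should serve as those aggregators.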
Publisher
IEEE
Issue Date
2011-09
Language
English
Citation
Scheduling and Resource Management for Parallel and Distributed Systems, pp. 380-388
URI
http://hdl.handle.net/10203/169922
Appears in Collection
CS-Conference Papers (Conference Papers)