Some Method for Constructing Cycles in a Graph

Abstract

Multidimensional matrices are a powerful tool for solving fundamental scientific problems as well as applied scientific and technical problems. As examples of the use of multidimensional matrices in various fields, one can cite the work of N.P. Sokolov, the creator of the theory of multidimensional matrices, and other works, including those performed by the authors.
The parallel implementation of the most complex operation of multidimensional matrix algebra, the (lm)-convolution product, is one of the most important problems. It is no coincidence that many publications are devoted to the parallel multiplication of ordinary two-dimensional matrices. These works consider how to distribute matrix elements among processors and how to design software and hardware architectures that implement this operation efficiently.
As a rule, two types of algorithms are used for parallel multiplication of ordinary matrices: band (striped) algorithms, in which the elements of the result matrix are computed individually, and block algorithms based on the Frobenius method. Many authors who have analyzed the efficiency of parallel matrix multiplication prefer block algorithms, arguing that the latter have a high degree of scalability. Since scalability is the most important property of parallel algorithms and of the computing systems that implement them, in what follows we consider a parallel algorithm for block multiplication of multidimensional matrices.
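To make the contrast concrete, the following is a minimal sketch (not taken from the paper) of block, Frobenius-style multiplication of two square matrices partitioned into a q x q grid of blocks; in a parallel setting each block of the result would be assigned to its own processor. The function names and the pure-Python representation are illustrative assumptions only.

# A minimal sketch: block (Frobenius-style) multiplication of two n x n
# matrices, each partitioned into a q x q grid of b x b blocks (b = n // q).
# Sequential here; in a parallel version each (I, J) block of C is one task.

def split_into_blocks(M, q):
    """Partition the square matrix M (a list of lists) into a q x q block grid."""
    n = len(M)
    b = n // q                                   # assumes q divides n evenly
    return [[[row[J * b:(J + 1) * b] for row in M[I * b:(I + 1) * b]]
             for J in range(q)] for I in range(q)]

def block_matmul(A, B, q):
    """Multiply A and B through their q x q block decompositions."""
    n = len(A)
    b = n // q
    Ab, Bb = split_into_blocks(A, q), split_into_blocks(B, q)
    C = [[0.0] * n for _ in range(n)]
    for I in range(q):
        for J in range(q):
            for K in range(q):                   # C_IJ += A_IK * B_KJ
                for i in range(b):
                    for j in range(b):
                        C[I * b + i][J * b + j] += sum(
                            Ab[I][K][i][k] * Bb[K][J][k][j] for k in range(b))
    return C

A striped algorithm, by contrast, would compute each element of C directly from one row of A and one column of B.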
The paper considers a generalization of Cannon's algorithm to the (lm)-convolution product of multidimensional matrices. It is proved that the multidimensional operand matrices can be represented as sets of sections over the Scottish indices, whose products yield the required result matrix. To split these sections into blocks and to recalculate block indices during the execution of the algorithm, a method based on special number systems is proposed, and a way of calculating the bases of these number systems is determined. The execution of the generalized Cannon algorithm is illustrated by block distribution tables for all stages of the algorithm.
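Since the paper builds on Cannon's algorithm and on number-system-based block indexing, a brief reminder of the well-known two-dimensional case may be useful. The sketch below simulates classic Cannon shifts on a q x q grid of block holders and shows a standard mixed-radix conversion between a multi-index and a linear block number; it does not reproduce the paper's generalization to the (lm)-convolution product, and the particular number systems proposed by the authors may differ from this generic linearization.

# Classic two-dimensional Cannon algorithm on a q x q block grid, simulated
# sequentially; `add` and `mul` combine blocks (e.g. block sum and product).

def cannon(Ab, Bb, q, add, mul):
    # Initial skew: row I of A is shifted left by I, column J of B up by J.
    A = [[Ab[I][(J + I) % q] for J in range(q)] for I in range(q)]
    B = [[Bb[(I + J) % q][J] for J in range(q)] for I in range(q)]
    C = [[None] * q for _ in range(q)]
    for _ in range(q):
        for I in range(q):
            for J in range(q):
                p = mul(A[I][J], B[I][J])        # local block product
                C[I][J] = p if C[I][J] is None else add(C[I][J], p)
        # Cyclic shifts: A one step left along rows, B one step up along columns.
        A = [[A[I][(J + 1) % q] for J in range(q)] for I in range(q)]
        B = [[B[(I + 1) % q][J] for J in range(q)] for I in range(q)]
    return C

# Mixed-radix linearization of block multi-indices (a standard technique).

def multi_index_to_linear(idx, radices):
    """Map a multi-index to a single block number in the mixed-radix system."""
    lin = 0
    for i, r in zip(idx, radices):
        lin = lin * r + i
    return lin

def linear_to_multi_index(lin, radices):
    """Inverse conversion: recover the multi-index from its block number."""
    idx = []
    for r in reversed(radices):
        idx.append(lin % r)
        lin //= r
    return tuple(reversed(idx))

After q iterations, the holder at position (I, J) has accumulated the full block C_IJ as the sum over K of A_IK * B_KJ, which is what a distributed implementation would gather into the result matrix.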
The proposed generalization method can be extended to other block matrix multiplication algorithms.

Author Biographies

Victor Iosifovich Munerman, Smolensk State University

Associate Professor of the Chair of Computer Science, Faculty of Physics and Mathematics, Cand.Sci. (Eng.), Associate Professor

Daniel Victorovich Munerman, Smolensk State University

Laboratory Assistant of the Chair of Computer Science, Faculty of Physics and Mathematics

References

1. Sokolov N.P. Functions of multidimensional matrices and their applications for the solutions of linear systems of partial differential equations. Ukrainian Mathematical Journal. 1970;22(6):657-674. doi: https://doi.org/10.1007/BF01086271
2. Iljin P.L., Munerman V.I. Recursive Computation of the Multidimensional Matrix Determinant. Computer Mathematics Systems and Their Applications. 2019;(20-1):162-167. Available at: https://elibrary.ru/item.asp?id=39103176 (accessed 19.06.2022). (In Russ., abstract in Eng.)
3. Munerman V.I., Munerman D.V. About the correspondence of data models and calculation models. Computer Mathematics Systems and Their Applications. 2021;(22):146-152. Available at: https://elibrary.ru/item.asp?id=46649891 (accessed 19.06.2022). (In Russ., abstract in Eng.)
4. Mukha V.S. Multidimensional Matrix Approach in Parallel Factor Analysis. Journal of Automation and Information Sciences. 2006;38(10):21-29. doi: https://doi.org/10.1615/JAutomatInfScien.v38.i10.30
5. Emelchenkov Ye.P., Munerman V.I., Munerman D.V., Samoilova T.A. The Object Oriented Approach to Designing Data Models. Modern Information Technologies and IT-Education. 2020;16(3):564-574. (In Russ., abstract in Eng.) doi: https://doi.org/10.25559/SITITO.16.202003.564-574
6. Goil S., Choudhary A. PARSIMONY: An Infrastructure for Parallel Multidimensional Analysis and Data Mining. Journal of Parallel and Distributed Computing. 2001;61(3):285-321. doi: https://doi.org/10.1006/jpdc.2000.1691
7. Munerman V., Munerman D., Samoilova T. The Heuristic Algorithm For Symmetric Horizontal Data Distribution. In: 2021 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (ElConRus). St. Petersburg, Moscow, Russia: IEEE Computer Society; 2021. p. 2161-2165. doi: https://doi.org/10.1109/ElConRus51938.2021.9396510
8. Efimov S.S. [Review of methods for parallelization of algorithms for solving some problems of computational discrete mathematics]. Mathematical Structures and Modeling. 2007;(17):72-93. Available at: https://elibrary.ru/item.asp?id=21034721 (accessed 19.06.2022). (In Russ.)
9. Younis A.A., Sunderraman R., Metzler M., Bourgeois A.G. Developing parallel programming and soft skills: A project based learning approach. Journal of Parallel and Distributed Computing. 2021;158(C):151-163. doi: https://doi.org/10.1016/j.jpdc.2021.07.015
10. Treschev I.A. Construction of multiflow supplements for unparalleling of excess algorithms. Information Science and Control Systems. 2008;(1):151-159. Available at: https://elibrary.ru/item.asp?id=10366217 (accessed 19.06.2022). (In Russ., abstract in Eng.)
11. Ivanov D.A. [Parallel multiplication of matrices with complex numbers]. In: Proceedings of the 14th International Conference on Energy-2019. Vol. 5. Ivanovo: ISPU; 2019. p. 60. Available at: https://elibrary.ru/item.asp?id=41191835&pff=1 (accessed 19.06.2022). (In Russ.)
12. Fineman J.T. Nearly work-efficient parallel algorithm for digraph reachability. In: Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing (STOC'2018). New York, NY, USA: Association for Computing Machinery; 2018. p. 457-470. doi: https://doi.org/10.1145/3188745.3188926
13. van der Klis M.H., Tellings J.L. Multidimensional scaling and linguistic theory. arXiv:2012.04946. 2020. doi: https://doi.org/10.48550/arXiv.2012.04946
14. Trobec R., Slivnik B., Bulić P., Robič B. Overview of Parallel Systems. In: Introduction to Parallel Computing. Undergraduate Topics in Computer Science. Cham: Springer; 2018. p. 9-44. doi: https://doi.org/10.1007/978-3-319-98833-7_2
15. Klyueva E.G., Adamov A.A., Ospanova A.E., Snitsar L.R., Kulbaeva L.N. Studying the optimal form of partitioning the data for the matrix multiplication on three fully connected heterogeneous processors with different bandwidths. Modern high technologies. 2019;(2):83-88. Available at: https://elibrary.ru/item.asp?id=37034414 (accessed 19.06.2022). (In Russ., abstract in Eng.)
16. Chen Y., Li K., Yang W., Xiao G., Xie X., Li T. Performance-Aware Model for Sparse Matrix-Matrix Multiplication on the Sunway TaihuLight Supercomputer. IEEE Transactions on Parallel and Distributed Systems. 2019;30(4):923-938. doi: https://doi.org/10.1109/TPDS.2018.2871189
17. Glushan V.M., Krasyuk O.I., Lozovoy A.Yu. Parallel multiplication of large-dimensional matrices on many given processors. Bulletin of the Ryazan State Radio Engineering University. 2020;(74):42-55. (In Russ., abstract in Eng.) doi: https://doi.org/10.21667/1995-4565-2020-74-42-55
18. Dorta I., Leon C., Rodriguez C. A comparison between MPI and OpenMP Branch-and-Bound skeletons. In: Eighth International Workshop on High-Level Parallel Programming Models and Supportive Environments, 2003. Proceedings., Nice, France; 2003. p. 66-73. doi: https://doi.org/10.1109/HIPS.2003.1196496
19. Van De Geijn R.A., Watts J. SUMMA: scalable universal matrix multiplication algorithm. Concurrency: Practice and Experience. 1997;9(4):255-274. doi: https://doi.org/10.1002/(SICI)1096-9128(199704)9:4<255::AID-CPE250>3.0.CO;2-2
20. Choi J., Walker D.W., Dongarra J.J. PUMMA: Parallel Universal Matrix Multiplication Algorithms on distributed memory concurrent computers. Concurrency: Practice and Experience. 1994;6(7):543-570. doi: https://doi.org/10.1002/cpe.4330060702
21. Alam K.S., Shishir T.A., Azharul Hasan K.M. Efficient Partitioning Algorithm for Parallel Multidimensional Matrix Operations by Linearization. In: Senjyu T., Mahalle P.N., Perumal T., Joshi A. (eds.). Information and Communication Technology for Intelligent Systems. ICTIS 2020. Smart Innovation, Systems and Technologies. Vol. 195. Singapore: Springer; 2021. p. 141-149. doi: https://doi.org/10.1007/978-981-15-7078-0_13
22. Fujiki D., Mahlke S., Das R. In-Memory Data Parallel Processor. ACM SIGPLAN Notices. 2018;53(2):1-14. doi: https://doi.org/10.1145/3296957.3173171
23. Liu J.-Sh., Lin J.-Yu., Chung Y.-C. Efficient parallel algorithms for multi-dimensional matrix operations. In: Proceedings International Symposium on Parallel Architectures, Algorithms and Networks (I-SPAN 2000). Dallas, TX, USA: IEEE Computer Society; 2000. p. 224-229. doi: https://doi.org/10.1109/ISPAN.2000.900289
24. Pan V.Y., Yu Y., Stewart C. Algebraic and Numerical Techniques for the Computation of Matrix Determinants. Computers & Mathematics with Applications. 1997;34(1):43-70. doi: https://doi.org/10.1016/S0898-1221(97)00097-7
25. Munerman V.I. Construction of hardware-software complexes architecture to improve massively data processing. Highly Available Systems. 2015;11(2):13-18. Available at: https://www.elibrary.ru/item.asp?id=23819273 (accessed 19.06.2022). (In Russ., abstract in Eng.)
Published
2022-10-24
How to Cite
MUNERMAN, Victor Iosifovich; MUNERMAN, Daniel Victorovich. Some Method for Constructing Cycles in a Graph. Modern Information Technologies and IT-Education, [S.l.], v. 18, n. 3, p. 566-577, Oct. 2022. ISSN 2411-1473. Available at: <http://sitito.cs.msu.ru/index.php/SITITO/article/view/896>. Date accessed: 3 July 2024. doi: https://doi.org/10.25559/SITITO.18.202203.566-577.
Section
Parallel and distributed programming, grid technologies, programming on GPUs
