Modification and Migration of Components of the Grid Site
Abstract
All components of a grid site are tied together by a common set of tasks: to securely receive, process, store, and transmit data. Sites that belong to the pools of large international experiments must also meet specific software requirements, running current versions of the middleware and operating systems that support the required grid services. Keeping these software components up to date is essential, since the state of the grid site as a whole depends on their performance. ATLAS, CMS, ALICE, LHCb, and other international experiments monitor the grid sites in their pools to verify that the software components are current. Today, experiments such as ATLAS and ALICE recommend that their grid sites migrate the most important component of a grid site, the computing element, to ARC-CE or HTCondor-CE. This article discusses the migration of the legacy CREAM-CE component to HTCondor-CE, as well as the modification of the associated components of the grid site. HTCondor-CE was chosen because of the prompt and effective support provided by its developers at all stages of installation and testing, the uncomplicated scheme for installing and configuring its services, and the high quality of its technical documentation. The modified nodes of the AZ-IFAN grid site were tested using the monitoring system of the data center of the Institute of Physics of the National Academy of Sciences of Azerbaijan, based on the Zabbix platform, as well as the EGI monitoring system, based on the Nagios platform. After positive test results were obtained, the performance of the HTCondor-CE services was assessed with the specialized monitoring tools of the Harvester system of the ATLAS experiment (CERN).
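To make the "uncomplicated scheme for installing and configuring" concrete, the sketch below shows a minimal HTCondor-CE setup of the kind the abstract alludes to. It is an illustrative assumption, not the AZ-IFAN site's actual configuration: it assumes an EL7 host whose local batch system is HTCondor, the upstream htcondor-ce packages, the classad-style JOB_ROUTER_ENTRIES syntax used by HTCondor-CE releases of that period, and the placeholder host name ce.example.org.

    # Install HTCondor-CE with the backend for a local HTCondor batch system
    yum install htcondor-ce htcondor-ce-condor

    # /etc/condor-ce/config.d/02-local-condor.conf (file name is illustrative)
    # Route every incoming grid job into the local HTCondor pool
    # (TargetUniverse 5 = the HTCondor "vanilla" universe)
    JOB_ROUTER_ENTRIES @=jre
    [
      name = "Local_Condor";
      TargetUniverse = 5;
    ]
    @jre

    # Start the service and trace a test job end to end
    systemctl enable --now condor-ce
    condor_ce_trace ce.example.org

The condor_ce_trace utility submits a short diagnostic job through the CE and reports on each stage of its progress, which makes it convenient for the kind of pre-production testing described above.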
