Models and methods for optimizing data processing in a multi-server environment using operation queues and chunk-based data authentication
DOI: https://doi.org/10.15276/ict.01.2024.02

Abstract
This work is motivated by the need to increase computational performance in modern industrial applications, including, but not limited to, dust flow simulation. As these tasks continue to grow in complexity, the need to optimise the utilisation of multi-server system resources becomes increasingly pressing, which in turn calls for new approaches to data processing. The goal of this work is to develop an information technology that can be applied in more complex systems involving multiple servers, with a particular focus on the management of operation queues.
The proposed solution aims to optimise the distribution of computational load across servers and to improve modelling accuracy relative to the time required for computations. This increases computational efficiency, especially when working with large datasets. Another key aspect of this work is a chunk-based model and method designed to strengthen data authentication and integrity in distributed operations. This approach helps prevent IP spoofing and man-in-the-middle (MITM) attacks during data transmission within a local area network, ensuring the quality and consistency of distributed computations in a multi-server environment.
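The abstract does not detail the chunk-based method itself; as a minimal sketch, assuming a pre-shared key between servers and a per-chunk HMAC-SHA256 tag (the chunk size, key handling, and index binding below are illustrative assumptions, not the authors' scheme), chunk-level authentication might look like this:

```python
import hmac
import hashlib
import os

CHUNK_SIZE = 64 * 1024       # hypothetical chunk size; the paper does not fix one
SECRET_KEY = os.urandom(32)  # shared key, assumed pre-distributed between servers

def sign_chunks(payload: bytes, key: bytes = SECRET_KEY):
    """Split a payload into chunks and attach an HMAC tag to each one."""
    chunks = []
    for i in range(0, len(payload), CHUNK_SIZE):
        chunk = payload[i:i + CHUNK_SIZE]
        # Bind the chunk offset into the tag so chunks cannot be reordered
        tag = hmac.new(key, i.to_bytes(8, "big") + chunk, hashlib.sha256).digest()
        chunks.append((i, chunk, tag))
    return chunks

def verify_chunks(chunks, key: bytes = SECRET_KEY) -> bool:
    """Reject the transfer if any chunk fails authentication."""
    for i, chunk, tag in chunks:
        expected = hmac.new(key, i.to_bytes(8, "big") + chunk, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False
    return True

if __name__ == "__main__":
    data = os.urandom(200_000)
    signed = sign_chunks(data)
    assert verify_chunks(signed)            # untampered transfer passes
    i, chunk, tag = signed[1]
    signed[1] = (i, b"x" + chunk[1:], tag)  # simulate in-transit modification
    assert not verify_chunks(signed)        # tampered chunk is detected
```

Binding the chunk offset into each tag means an attacker on the local network cannot reorder or replay otherwise valid chunks, which is one way tampering by a MITM on a chunked transfer can be detected.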
Experimental studies have confirmed the effectiveness of the proposed solution: it significantly reduces computation time while improving the viability of the modelling process. The system architecture combines operation-queue management, parallel data processing, and chunk-based authentication; its implementation demonstrated high effectiveness under real-world conditions. The proposed solution can be applied to optimise computer modelling, scientific computations, and audio/video conversion, as well as a wide range of other tasks that require scalability as data volumes and task complexity grow.
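As an illustration of the queue-based load distribution the architecture relies on, the following sketch models servers as worker threads draining a shared operation queue (the server count, sentinel protocol, and simulated workload are hypothetical stand-ins; a real deployment would dispatch operations over the network rather than to local threads):

```python
import queue
import threading
import time

NUM_SERVERS = 3               # hypothetical server count
tasks: queue.Queue = queue.Queue()

def server_worker(server_id: int) -> None:
    """Each worker stands in for one server draining the shared operation queue."""
    while True:
        task = tasks.get()
        if task is None:      # sentinel: no more work for this server
            tasks.task_done()
            break
        time.sleep(0.01)      # placeholder for the actual computation
        print(f"server {server_id} finished {task}")
        tasks.task_done()

threads = [threading.Thread(target=server_worker, args=(i,)) for i in range(NUM_SERVERS)]
for t in threads:
    t.start()

for op in range(10):          # enqueue ten sample operations
    tasks.put(f"operation-{op}")
for _ in threads:             # one sentinel per server
    tasks.put(None)

tasks.join()                  # block until every queued operation is processed
for t in threads:
    t.join()
```

A single shared queue lets faster servers take on more operations automatically, which is the load-balancing behaviour the abstract attributes to operation-queue management.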