Various Approaches Proposed for Eliminating Duplicate Data in a System
Author(s): Roman Čerešňák, Karol Matiaško, Adam Dudáš
Subject(s): Information Architecture, ICT Information and Communications Technologies, Transport / Logistics
Published by: Žilinská univerzita v Žiline
Keywords: data duplication; distributed data processing; software architecture
Summary/Abstract: The growth of the big data processing market has led to an increased load on computational data centers and to changes in the methods used for storing data, in the communication between computing units, and in the computational time needed to process or edit data. Methods of distributed and parallel data processing have introduced new data-related problems that need to be examined. Unlike conventional cloud services, big data services are characterized by a tight coupling between data and computation: a computational task can run only where the relevant data are available. Three factors influence the speed and efficiency of data processing: data duplication, data integrity, and data security. This motivates our study of the growing time needed for data processing, which we address by optimizing these three factors in geographically distributed data centers.
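Note: the abstract names data duplication as one of the three factors, without detailing a mechanism. As an illustration only (not the authors' method), the following minimal Python sketch shows a common fixed-block, hash-based deduplication scheme of the kind used in storage systems: identical blocks are stored once and the original stream is described by an ordered list of block digests. Block size and the SHA-256 choice are assumptions for the example.

    import hashlib

    def deduplicate_blocks(data: bytes, block_size: int = 4096):
        """Split data into fixed-size blocks, keeping one copy per unique block.

        Returns the unique-block store and the ordered digest 'recipe'
        needed to reconstruct the original byte stream.
        """
        store = {}    # SHA-256 digest -> block bytes (one copy per unique block)
        recipe = []   # ordered digests describing the original stream
        for offset in range(0, len(data), block_size):
            block = data[offset:offset + block_size]
            digest = hashlib.sha256(block).hexdigest()
            store.setdefault(digest, block)  # store only on first occurrence
            recipe.append(digest)
        return store, recipe

    def reconstruct(store, recipe):
        """Rebuild the original byte stream from the store and recipe."""
        return b"".join(store[d] for d in recipe)

    if __name__ == "__main__":
        payload = b"abcd" * 3000 + b"unique tail"
        store, recipe = deduplicate_blocks(payload)
        assert reconstruct(store, recipe) == payload
        print(f"{len(recipe)} blocks referenced, {len(store)} stored uniquely")

In a geographically distributed setting, such digests can also be exchanged between data centers before transferring blocks, so that already-present data is never re-sent; this is one plausible reading of why duplication affects processing time in the abstract's sense.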
Journal: Komunikácie - vedecké listy Žilinskej univerzity v Žiline
- Issue Year: 23/2021
- Issue No: 4
- Page Range: 223-232
- Page Count: 10
- Language: English