Big Data = Clear + Dirty + Dark Data
Author(s): Kamila Migdał-Najman, Krzysztof Najman
Subject(s): Economy
Published by: Wydawnictwo Uniwersytetu Ekonomicznego we Wrocławiu
Keywords: Big Data; Clear Data; Dirty Data; Dark Data
Summary/Abstract: The development of data communication technology, the Internet and computing, together with a simultaneous decrease in the unit costs of data collection and storage, has brought significant quantitative and qualitative changes in the approach to data and to the possibilities of analysing them. The increasingly dense, continuous and unstructured data stream known as Big Data evokes much emotion today. On the one hand, the lack of adequate quantities of data has always been a challenge for the methods of statistical inference and one of the stimuli of their development. On the other hand, very large data sets carry threats to the reliability of inference. In such collections, alongside data of sufficient quality (Clear Data), a significant share is held by data that are inaccurate, outdated, noisy, often repeatedly duplicated, incomplete or erroneous (Dirty Data), as well as data about whose quality or usability nothing is known (Dark Data). The aim of this study is to present a critical view of the qualitative structure of Big Data sets.
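A minimal sketch (not from the article) of how records might be screened into the Clear/Dirty categories the abstract names; the field names, validity rules and example records are hypothetical assumptions, and Dark Data is only mentioned in a comment since by definition its quality cannot be checked:

```python
# Illustrative sketch only: classifying records by data quality.
# Field names, rules and sample records are hypothetical assumptions.
from datetime import date

def classify(record, required_fields, seen):
    """Label a record 'clear' or 'dirty' using simple completeness,
    validity and duplication checks; 'dark' would apply to data whose
    quality or usability cannot be assessed at all (not simulated here)."""
    key = (record.get("id"), record.get("value"))
    if key in seen:                                   # duplicate -> dirty
        return "dirty"
    seen.add(key)
    if any(record.get(f) in (None, "") for f in required_fields):
        return "dirty"                                # incomplete -> dirty
    if not isinstance(record.get("value"), (int, float)):
        return "dirty"                                # erroneous value -> dirty
    if record.get("updated") and record["updated"] < date(2015, 1, 1):
        return "dirty"                                # outdated -> dirty
    return "clear"

records = [
    {"id": 1, "value": 10.5,  "updated": date(2016, 6, 1)},
    {"id": 1, "value": 10.5,  "updated": date(2016, 6, 1)},  # duplicate
    {"id": 2, "value": "n/a", "updated": date(2016, 6, 1)},  # erroneous
    {"id": 3, "value": 7.0,   "updated": None},               # incomplete
]

seen = set()
for r in records:
    print(r["id"], classify(r, ["id", "value", "updated"], seen))
```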
Journal: Prace Naukowe Uniwersytetu Ekonomicznego we Wrocławiu
- Issue Year: 2017
- Issue No: 469
- Page Range: 131-139
- Page Count: 9
- Language: Polish