HADOOP: How to Process the Big Data?

Ladislav Buřita, David Koblížek

University of Defence, Kounicova 65, 662 10 Brno, Czech Republic

Tomas Bata University in Zlin, Mostní 5139, 760 01 Zlin, Czech Republic

Abstract. The paper is concerned with scalable pre-processing of data using HADOOP, a Java-based framework for processing large volumes of data, so-called Big Data. The first part explains the main components of the HADOOP system, including the distributed file system (HDFS) and the MapReduce method. The second part describes the system installation process, and the final part explains the purpose of the HADOOP experiment.

Keywords: HADOOP, Big Data, HDFS, MapReduce, Apache
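The MapReduce method named in the keywords splits processing into a map phase, which emits intermediate key-value pairs, and a reduce phase, which aggregates the values per key. As a minimal sketch of this idea, the classic word-count example can be expressed in plain Java collections; the class and method names below are illustrative only, and a real HADOOP job would instead implement the `org.apache.hadoop.mapreduce` Mapper and Reducer APIs.

```java
import java.util.*;
import java.util.stream.*;

// Illustrative sketch of the MapReduce word-count idea using plain Java.
// This is not the Hadoop API; it only mimics the map and reduce phases.
public class WordCountSketch {

    // "Map" phase: turn each input line into (word, 1) pairs.
    static Stream<Map.Entry<String, Integer>> map(String line) {
        return Arrays.stream(line.toLowerCase().split("\\s+"))
                     .filter(w -> !w.isEmpty())
                     .map(w -> Map.entry(w, 1));
    }

    // "Reduce" phase: merge the pairs by key, summing the counts.
    // A TreeMap keeps the result sorted by word for readable output.
    public static Map<String, Integer> run(List<String> lines) {
        return lines.stream()
                    .flatMap(WordCountSketch::map)
                    .collect(Collectors.toMap(Map.Entry::getKey,
                                              Map.Entry::getValue,
                                              Integer::sum,
                                              TreeMap::new));
    }

    public static void main(String[] args) {
        List<String> input = List.of("big data big", "data processing");
        System.out.println(run(input)); // {big=2, data=2, processing=1}
    }
}
```

In HADOOP itself, the same two phases run in parallel across the cluster: map tasks process blocks stored in HDFS, and the framework shuffles the intermediate pairs to reduce tasks by key.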

KIT-2013 Conference Proceedings