Large-scale incremental processing for MapReduce

DC Field: Value
dc.contributor.advisor: Maeng, Seung-Ryoul
dc.contributor.advisor: 맹승렬
dc.contributor.author: Lee, Dae-Woo
dc.contributor.author: 이대우
dc.date.accessioned: 2015-04-23T08:30:33Z
dc.date.available: 2015-04-23T08:30:33Z
dc.date.issued: 2014
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=568602&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/197814
dc.description: Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST), Department of Computer Science, 2014.2, [vii, 69 p.]
dc.description.abstract (eng): An important property of today's big data processing is that the same computation is often repeated on datasets that evolve over time, such as web and social network data. This style of repeated computation is also used by many iterative algorithms. While repeating the full computation on the entire dataset is feasible with distributed computing frameworks such as Hadoop, it is obviously inefficient and wastes resources.

In this dissertation, we present HadUP (Hadoop with Update Processing), a modified Hadoop architecture tailored to large-scale incremental processing for conventional MapReduce algorithms. Several approaches have been proposed to achieve a similar goal using task-level memoization: they keep the previous results of tasks permanently and reuse them when the same computation on the same task input is needed again. However, task-level memoization detects changes in datasets at a coarse-grained level, which often makes such approaches ineffective. Our analysis reveals that task-level memoization can be effective only if each task processes a few KB of input data. In contrast, HadUP detects and computes changes in datasets at a fine-grained level using a deduplication-based snapshot differential algorithm (D-SD) and update propagation.

Update propagation is a key primitive for HadUP's efficient incremental processing. Many applications for today's big data processing consist of data-parallel operations, where an operation transforms one or more input datasets into one output dataset. For each operation, the same computation is concurrently applied to a single input record or a group of input records. The independence of these executions allows us to compute the records to be inserted into or deleted from the output dataset, provided that the records inserted into or deleted from the input datasets are explicitly given. In this way, update propagation computes the updated result without full recomputation. Our evaluation shows that HadUP provides high performance... (A short code sketch of the snapshot differential and update propagation ideas follows the field list below.)
dc.language: eng
dc.publisher: 한국과학기술원 (KAIST)
dc.subject: Big data processing
dc.subject: 중복 제거 (data deduplication)
dc.subject: 하둡 (Hadoop)
dc.subject: 맵리듀스 (MapReduce)
dc.subject: 점진적 처리 (incremental processing)
dc.subject: 빅데이터 처리 (big data processing)
dc.subject: Incremental processing
dc.subject: MapReduce
dc.subject: Hadoop
dc.subject: Data deduplication
dc.title: Large-scale incremental processing for MapReduce
dc.title.alternative: 맵리듀스를 위한 대규모 점진적 처리에 대한 연구 (A study on large-scale incremental processing for MapReduce)
dc.type: Thesis (Ph.D.)
dc.identifier.CNRN: 568602/325007
dc.description.department: 한국과학기술원 (KAIST), 전산학과 (Department of Computer Science)
dc.identifier.uid: 020037429
dc.contributor.localauthor: Maeng, Seung-Ryoul
dc.contributor.localauthor: 맹승렬
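
The abstract's two mechanisms, snapshot differencing (D-SD) and update propagation, lend themselves to a compact illustration. The Python sketch below is a minimal toy under simplifying assumptions, not HadUP's implementation: diff_snapshots is a hypothetical stand-in for the deduplication-based snapshot differential at whole-record granularity, and propagate_update patches the output of a word-count-style data-parallel operation using only the inserted and deleted input records. All names are illustrative.

from collections import Counter
from typing import Dict, Iterable, List, Tuple

# Snapshot differential: report the records inserted into or deleted from
# the dataset between two snapshots, instead of rescanning everything.
# (HadUP's D-SD achieves this with deduplication techniques; a multiset
# difference captures the same input/output behavior in this toy.)
def diff_snapshots(old: Iterable[str], new: Iterable[str]) -> Tuple[Counter, Counter]:
    old_counts, new_counts = Counter(old), Counter(new)
    inserted = new_counts - old_counts  # records present only in the new snapshot
    deleted = old_counts - new_counts   # records present only in the old snapshot
    return inserted, deleted

# A data-parallel map: applied independently to each record, so mapping
# only the changed records yields exactly the changed intermediate pairs.
def word_count_map(record: str) -> List[Tuple[str, int]]:
    return [(word, 1) for word in record.split()]

# Update propagation: patch the previous result with the deltas rather
# than recomputing the whole output dataset.
def propagate_update(result: Dict[str, int],
                     inserted: Counter, deleted: Counter) -> Dict[str, int]:
    for record, mult in inserted.items():
        for word, n in word_count_map(record):
            result[word] = result.get(word, 0) + n * mult
    for record, mult in deleted.items():
        for word, n in word_count_map(record):
            result[word] -= n * mult
            if result[word] == 0:
                del result[word]
    return result

# Usage: one full computation, then incremental updates thereafter.
old_data = ["a b a", "c d"]
new_data = ["a b a", "c e", "c e"]

full = Counter(w for rec in old_data for w, _ in word_count_map(rec))
ins, dels = diff_snapshots(old_data, new_data)
patched = propagate_update(dict(full), ins, dels)

# The incrementally patched result matches a from-scratch recomputation.
assert patched == Counter(w for rec in new_data for w, _ in word_count_map(rec))

The design point matches the abstract: because the operation processes each record independently, patching with deltas is correct by construction, and the cost scales with the size of the change rather than the size of the dataset.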
Appears in Collection: CS-Theses_Ph.D. (박사논문 / Ph.D. theses)
Files in This Item: There are no files associated with this item.
