(An) adaptive sequential prefetching scheme in shared-memory multiprocessors
(Korean title: 공유메모리 다중처리기하에서 선인출 양을 조절하는 순차적 선인출 방식)

DC Field: Value
dc.contributor.advisor: Maeng, Seung-Ryoul
dc.contributor.advisor: 맹승렬
dc.contributor.author: Tcheun, Myoung-Kwon
dc.contributor.author: 천명권
dc.date.accessioned: 2011-12-13T05:24:25Z
dc.date.available: 2011-12-13T05:24:25Z
dc.date.issued: 1998
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=134781&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/33103
dc.description: Doctoral thesis - KAIST : Department of Computer Science, 1998.2, [viii, 78 p.]
dc.description.abstract: Processor performance has increased dramatically over the past decade and has outpaced that of main memory. As a result, main memory access latency has become an obstacle to high-performance computing. In large-scale multiprocessors with a general interconnection network, program execution time depends significantly on the shared-memory access latency, which consists of the memory access latency and the network latency. With the advent of very fast uniprocessors and massively parallel systems, the shared-memory access latency reaches tens to hundreds of processor cycles. Most of this latency comes from the large network latency associated with traversing the processor-memory interconnect. Caches are quite effective at reducing and hiding the main memory access latency in uniprocessor systems and the shared-memory access latency in shared-memory multiprocessors. However, the remaining cache miss penalty is still a serious bottleneck to high-performance computing.

Prefetching is an attractive scheme for reducing the cache miss penalty by overlapping processor computation with data accesses. Especially in multiprocessors, the cache miss penalty can be decreased significantly by overlapping the network latency of a fetched block with those of prefetched blocks. Many prefetching schemes based on software or hardware have been proposed. Software prefetching schemes perform static program analysis and insert explicit prefetch instructions into the program code, which increases the program size. In contrast, hardware prefetching schemes control prefetch activity during program execution using hardware alone. Several hardware prefetching schemes prefetch blocks when a regular access pattern is detected, but they require complex hardware to detect such a pattern. Prefetch on miss [29] is a simple hardware scheme, but it needs a miss to prefetch each block. Thus this scheme reduces the miss rate ...
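The sequential prefetching idea the abstract describes can be illustrated with a minimal toy simulation. This is a sketch under stated assumptions, not the thesis's actual hardware design: the class, function names, and the infinite-capacity cache model are all illustrative. It shows how the prefetching degree K (the number of consecutive blocks fetched alongside a demand miss) reduces the miss count on a purely sequential stream:

```python
class SequentialPrefetchCache:
    """Toy model of a cache with sequential prefetching of degree K.

    Hypothetical simulator for illustration only: capacity is unbounded
    and prefetches are assumed to complete before the next access.
    """

    def __init__(self, degree):
        self.degree = degree      # K: blocks prefetched per demand miss
        self.resident = set()     # block addresses currently in the cache
        self.misses = 0

    def access(self, block):
        """Simulate a demand access to a block address."""
        if block not in self.resident:
            self.misses += 1
            self.resident.add(block)              # fetch the missed block
            for i in range(1, self.degree + 1):   # sequential prefetch:
                self.resident.add(block + i)      # next K consecutive blocks


def misses_on_sequential_stream(degree, length=16):
    """Misses seen by a purely sequential stream of `length` accesses."""
    cache = SequentialPrefetchCache(degree)
    for addr in range(length):
        cache.access(addr)
    return cache.misses
```

On a sequential stream of 16 blocks, degree 0 (no prefetching) misses on every block, degree 1 (prefetch on miss) halves the misses, and degree 3 quarters them; choosing K per stream at run time is the adaptivity the title refers to.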
dc.language: eng
dc.publisher: KAIST
dc.subject: The prefetching degree
dc.subject: Sequential prefetching
dc.subject: Shared-memory multiprocessors
dc.subject: Sequential streams
dc.subject: Sequential streams (순차적 스트림)
dc.subject: Prefetching degree (선인출 양)
dc.subject: Sequential prefetching (순차적 선인출)
dc.subject: Multiprocessors (다중처리기)
dc.title: (An) adaptive sequential prefetching scheme in shared-memory multiprocessors
dc.title.alternative: 공유메모리 다중처리기하에서 선인출 양을 조절하는 순차적 선인출 방식
dc.type: Thesis (Ph.D)
dc.identifier.CNRN: 134781/325007
dc.description.department: KAIST : Department of Computer Science
dc.identifier.uid: 000935363
dc.contributor.localauthor: Maeng, Seung-Ryoul
dc.contributor.localauthor: 맹승렬
Appears in Collection: CS-Theses_Ph.D. (doctoral theses)