DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Maeng, Seung-Ryoul | - |
dc.contributor.advisor | 맹승렬 | - |
dc.contributor.author | Tcheun, Myoung-Kwon | - |
dc.contributor.author | 천명권 | - |
dc.date.accessioned | 2011-12-13T05:24:25Z | - |
dc.date.available | 2011-12-13T05:24:25Z | - |
dc.date.issued | 1998 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=134781&flag=dissertation | - |
dc.identifier.uri | http://hdl.handle.net/10203/33103 | - |
dc.description | Thesis (Ph.D.) - 한국과학기술원 (KAIST) : Department of Computer Science, 1998.2, [viii, 78 p.] | - |
dc.description.abstract | The performance of processors has increased dramatically over the past decade and has far outpaced that of main memory. As a result, main memory access latency has become an obstacle to achieving high performance. In large-scale multiprocessors with a general interconnection network, program execution time depends significantly on the shared-memory access latency, which consists of the memory access latency and the network latency. With the advent of very fast uniprocessors and massively parallel systems, the shared-memory access latency reaches tens to hundreds of processor cycles. Most of this latency comes from the large network latency incurred in traversing the processor-memory interconnect. Caches are quite effective at reducing and hiding the main memory access latency in uniprocessor systems and the shared-memory access latency in shared-memory multiprocessors. However, the remaining cache miss penalty is still a serious bottleneck for high performance computing. Prefetching is an attractive scheme for reducing the cache miss penalty because it overlaps processor computation with data accesses. Especially in multiprocessors, the cache miss penalty can be decreased significantly by overlapping the network latency of a fetched block with those of prefetched blocks. Many prefetching schemes based on software or hardware have been proposed. Software prefetching schemes perform static program analysis and explicitly insert prefetch instructions into the program code, which increases the program size. In contrast, hardware prefetching schemes control prefetch activity entirely in hardware, according to the program's execution. Several hardware prefetching schemes prefetch blocks once a regular access pattern is detected, but they require complex hardware to detect such a pattern. Prefetch on misses [29] is a simple hardware scheme, but it needs a miss to prefetch each block. Thus this scheme reduces the miss rate ... | eng |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | The prefetching degree | - |
dc.subject | Sequential prefetching | - |
dc.subject | Shared-memory multiprocessors | - |
dc.subject | Sequential streams | - |
dc.subject | 순차적 스트림 | - |
dc.subject | 선인출 양 | - |
dc.subject | 순차적 선인출 | - |
dc.subject | 다중처리기 | - |
dc.title | (An) adaptive sequential prefetching scheme in shared-memory multiprocessors | - |
dc.title.alternative | 공유메모리 다중처리기하에서 선인출 양을 조절하는 순차적 선인출 방식 | - |
dc.type | Thesis(Ph.D) | - |
dc.identifier.CNRN | 134781/325007 | - |
dc.description.department | 한국과학기술원 (KAIST) : Department of Computer Science | - |
dc.identifier.uid | 000935363 | - |
dc.contributor.localauthor | Maeng, Seung-Ryoul | - |
dc.contributor.localauthor | 맹승렬 | - |
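The abstract describes sequential prefetching whose prefetching degree is adjusted adaptively. As a rough illustration of the general idea, the sketch below prefetches the next K sequential blocks on each miss and periodically raises or lowers K based on how many prefetched blocks were actually referenced. The class name, thresholds, and adaptation rule are illustrative assumptions, and the cache is unbounded for brevity; this is not the thesis's exact algorithm.

```python
class AdaptiveSequentialPrefetcher:
    """Toy model of adaptive sequential prefetching (hypothetical rule)."""

    def __init__(self, max_degree=8, window=16):
        self.cache = set()          # blocks currently cached (unbounded, for simplicity)
        self.prefetched = set()     # prefetched blocks not yet referenced
        self.degree = 1             # current prefetching degree K
        self.max_degree = max_degree
        self.window = window        # prefetches issued per adaptation interval
        self.issued = 0             # prefetches issued in this interval
        self.useful = 0             # prefetched blocks that were later referenced
        self.misses = 0

    def access(self, block):
        """Reference one block; returns True on a cache hit."""
        if block in self.cache:
            if block in self.prefetched:        # the prefetch turned out useful
                self.prefetched.discard(block)
                self.useful += 1
            return True
        self.misses += 1
        self.cache.add(block)
        # On a miss to block b, prefetch the next K sequential blocks.
        for b in range(block + 1, block + 1 + self.degree):
            if b not in self.cache:
                self.cache.add(b)
                self.prefetched.add(b)
                self.issued += 1
        self._adapt()
        return False

    def _adapt(self):
        # Adjust K once enough prefetches have been issued to judge usefulness.
        if self.issued < self.window:
            return
        usefulness = self.useful / self.issued
        if usefulness > 0.75 and self.degree < self.max_degree:
            self.degree *= 2        # sequential stream detected: prefetch more
        elif usefulness < 0.25 and self.degree > 1:
            self.degree //= 2       # prefetches are being wasted: back off
        self.issued = self.useful = 0

p = AdaptiveSequentialPrefetcher()
for blk in range(64):               # a purely sequential reference stream
    p.access(blk)
print(p.degree, p.misses)
```

On a purely sequential stream the usefulness ratio stays high, so the degree grows and the miss count falls well below one miss per block; on a random stream the same rule would shrink the degree back toward one.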
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.