Filter Data Cache: An Energy-Efficient Small L0 Data Cache Architecture Driven by Miss Cost Reduction

Cited 6 times in Web of Science · Cited 9 times in Scopus
dc.contributor.author: Lee, Jongmin (ko)
dc.contributor.author: Kim, Soon-Tae (ko)
dc.date.accessioned: 2015-07-22T04:51:55Z
dc.date.available: 2015-07-22T04:51:55Z
dc.date.created: 2015-07-08
dc.date.issued: 2015-07
dc.identifier.citation: IEEE TRANSACTIONS ON COMPUTERS, v.64, no.7, pp.1927 - 1939
dc.identifier.issn: 0018-9340
dc.identifier.uri: http://hdl.handle.net/10203/199999
dc.description.abstract: On-chip cache memories play an important role in resource-constrained embedded systems by filtering out most off-chip memory accesses. Because cache latency and energy consumption are generally proportional to cache size, a small cache at the top level of the memory hierarchy is desirable. Previous work introduced a cache architecture called the filter cache to reduce the hit time and energy consumption of the L1 instruction cache. However, applying the filter cache to the data cache requires a different approach and has received little research attention. In this paper, we propose a filter data cache architecture that effectively adapts the filter cache to the data cache hierarchy. We observe that when the filter cache is used as a data cache, misses occur frequently and tend to be consecutive. These misses cost performance and energy by increasing cache latency and loading unnecessary data. The proposed filter data cache architecture reduces miss costs using three schemes: an early cache hit predictor (ECHP), locality-based allocation (LA), and No Tag-Matching Write (NTW). Experimental results show that the proposed filter data cache reduces the energy consumption of the data caches by 21 percent compared with the filter cache, and that of the ALU by 27.2 percent on average. The area and leakage power overheads are small, and the proposed filter data cache architecture does not hurt performance.
dc.language: English
dc.publisher: IEEE COMPUTER SOC
dc.subject: PERFORMANCE
dc.subject: DESIGN
dc.subject: SYSTEM
dc.title: Filter Data Cache: An Energy-Efficient Small L0 Data Cache Architecture Driven by Miss Cost Reduction
dc.type: Article
dc.identifier.wosid: 000355989900010
dc.identifier.scopusid: 2-s2.0-84933055328
dc.type.rims: ART
dc.citation.volume: 64
dc.citation.issue: 7
dc.citation.beginningpage: 1927
dc.citation.endingpage: 1939
dc.citation.publicationname: IEEE TRANSACTIONS ON COMPUTERS
dc.identifier.doi: 10.1109/TC.2014.2349503
dc.contributor.localauthor: Kim, Soon-Tae
dc.type.journal: Article
dc.subject.keywordAuthor: Filter data cache
dc.subject.keywordAuthor: early cache hit predictor
dc.subject.keywordAuthor: locality-based allocation
dc.subject.keywordAuthor: No Tag-Matching Write
dc.subject.keywordPlus: PERFORMANCE
dc.subject.keywordPlus: DESIGN
dc.subject.keywordPlus: SYSTEM
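The abstract's core idea, a tiny L0 "filter" cache in front of the L1 data cache whose frequent misses add latency, can be illustrated with a minimal sketch. This is not the paper's implementation; the sizes, latencies, and direct-mapped organization below are all hypothetical, chosen only to show how an L0 miss pays both the L0 probe and the L1 access:

```python
# Illustrative sketch (not the paper's design): a tiny direct-mapped L0
# filter cache probed before L1. An L0 miss pays the L0 probe plus the
# L1 access, which is the "miss cost" the paper's schemes aim to reduce.
L0_SETS = 8            # 8 blocks, direct-mapped (hypothetical size)
BLOCK = 16             # 16-byte blocks (hypothetical)
L0_HIT, L1_HIT = 1, 3  # access latencies in cycles (hypothetical)

def simulate(addresses):
    """Return (l0_hits, l0_misses, total_cycles) for an address trace."""
    l0 = [None] * L0_SETS          # one tag per set; None = invalid
    hits = misses = cycles = 0
    for addr in addresses:
        block = addr // BLOCK
        idx, tag = block % L0_SETS, block // L0_SETS
        if l0[idx] == tag:
            hits += 1
            cycles += L0_HIT
        else:                       # L0 miss: probe L1, allocate into L0
            misses += 1
            cycles += L0_HIT + L1_HIT
            l0[idx] = tag
    return hits, misses, cycles

# A conflict-heavy stride misses on every access, while a small reused
# working set hits in L0 after the first fill.
print(simulate([i * 256 for i in range(16)]))   # -> (0, 16, 64)
print(simulate([0, 4, 8, 12] * 4))              # -> (15, 1, 19)
```

The strided trace shows the pathology the abstract describes: consecutive L0 misses that make the small cache a pure latency and energy overhead unless the miss cost is reduced.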
Appears in Collection
CS-Journal Papers (저널논문, Journal Papers)
Files in This Item
There are no files associated with this item.