Time-sensitivity-aware shared cache architecture for multi-core embedded systems

Cited 1 time in Web of Science · Cited 1 time in Scopus
  • Hits: 464
  • Downloads: 0
DC Field / Value / Language
dc.contributor.author: Lee, Myoungjun (ko)
dc.contributor.author: Kim, Soontae (ko)
dc.date.accessioned: 2019-11-11T06:21:09Z
dc.date.available: 2019-11-11T06:21:09Z
dc.date.created: 2019-11-11
dc.date.issued: 2019-10
dc.identifier.citation: JOURNAL OF SUPERCOMPUTING, v.75, no.10, pp.6746 - 6776
dc.identifier.issn: 0920-8542
dc.identifier.uri: http://hdl.handle.net/10203/268336
dc.description.abstract: In embedded systems such as automotive systems, multi-core processors are expected to improve performance and reduce manufacturing cost by integrating multiple functions on a single chip. However, inter-core interference in the shared last-level cache (LLC) results in increased and unpredictable execution times for time-sensitive tasks (TSTs), which have (soft) timing constraints, thereby increasing the deadline miss rates of such systems. In this paper, we propose a time-sensitivity-aware, dead block-based shared LLC architecture to mitigate these problems. First, a time-sensitivity indication bit is added to each cache block, which allows the proposed LLC architecture to recognize instructions and data belonging to TSTs. Second, portions of the LLC space are allocated to general tasks without interfering with TSTs through a time-sensitivity-aware, dead block-based cache partitioning technique. Third, to further reduce the deadline miss rate of TSTs, we propose task matching in shared caches together with a cache partitioning scheme that considers the memory access characteristics and the time-sensitivity of tasks (TATS); TATS is combined with our proposed dead block-based scheme. Our evaluation shows that the proposed schemes reduce deadline miss rates of TSTs compared to conventional shared caches. On a dual-core system, compared to a baseline, equal partitioning, and state-of-the-art quality-of-service-aware cache partitioning, our proposed dead block-based cache partitioning provides 9.3%, 30.5%, and 2.6% lower average deadline miss rates, respectively. On a quad-core system, compared to the same three configurations, the combination of our proposed schemes provides 21.2%, 17.7%, and 4.1% lower average deadline miss rates, respectively.
dc.language: English
dc.publisher: SPRINGER
dc.title: Time-sensitivity-aware shared cache architecture for multi-core embedded systems
dc.type: Article
dc.identifier.wosid: 000492960000022
dc.identifier.scopusid: 2-s2.0-85066062019
dc.type.rims: ART
dc.citation.volume: 75
dc.citation.issue: 10
dc.citation.beginningpage: 6746
dc.citation.endingpage: 6776
dc.citation.publicationname: JOURNAL OF SUPERCOMPUTING
dc.identifier.doi: 10.1007/s11227-019-02891-w
dc.contributor.localauthor: Kim, Soontae
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Multi-core
dc.subject.keywordAuthor: Shared caches
dc.subject.keywordAuthor: Quality of service
dc.subject.keywordAuthor: Cache partitioning
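The abstract describes adding a time-sensitivity indication bit to each cache block so that replacement decisions can avoid evicting blocks belonging to time-sensitive tasks. The following is a minimal illustrative sketch of that idea, not the paper's actual algorithm: a single set-associative cache set whose victim selection prefers the least-recently-used block that is *not* time-sensitive, falling back to plain LRU only when every block in the set is time-sensitive. The class name `TSAwareCacheSet` and its interface are assumptions made for illustration.

```python
from collections import OrderedDict

class TSAwareCacheSet:
    """One cache set; maps tag -> time-sensitivity bit, ordered LRU -> MRU.

    Illustrative sketch of a time-sensitivity-aware replacement policy,
    assuming each block carries one extra indication bit as in the abstract.
    """

    def __init__(self, ways):
        self.ways = ways
        self.blocks = OrderedDict()  # tag -> bool (block belongs to a TST?)

    def access(self, tag, ts_bit):
        """Return True on a hit. On a miss, insert the block, evicting if full."""
        if tag in self.blocks:
            self.blocks.move_to_end(tag)  # promote to MRU position
            return True
        if len(self.blocks) == self.ways:
            # Prefer the LRU non-time-sensitive block as the victim;
            # fall back to plain LRU if every resident block is time-sensitive.
            victim = next((t for t, ts in self.blocks.items() if not ts),
                          next(iter(self.blocks)))
            del self.blocks[victim]
        self.blocks[tag] = ts_bit
        return False


# Usage: a time-sensitive block survives interference from general-task fills.
s = TSAwareCacheSet(ways=2)
s.access(0xA, ts_bit=True)    # miss: TST block inserted
s.access(0xB, ts_bit=False)   # miss: general-task block inserted
s.access(0xC, ts_bit=False)   # miss: evicts 0xB (non-TS), sparing 0xA
print(s.access(0xA, ts_bit=True))  # True: the TST block is still resident
```

Under a conventional LRU policy the third fill would have evicted 0xA (the LRU block) instead, which is exactly the inter-core interference on TSTs that the proposed architecture aims to reduce.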
Appears in Collection
CS-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
