An Accelerated Edge Cloud System for Energy Data Stream Processing Based on Adaptive Incremental Deep Learning Scheme

As smart metering technology evolves, power suppliers can make low-cost, low-risk estimates of customer-side power consumption by analyzing energy demand data collected in real time. With advances in network infrastructure, smart sensors, and various monitoring technologies, a standardized energy metering infrastructure, called advanced metering infrastructure (AMI), has been introduced and deployed in urban households, enabling efficient power generation planning. Compared to traditional stochastic approaches for time-series data analysis, deep-learning methods have shown superior accuracy in many prediction applications. Because smart meters and infrastructure monitors continuously produce measurements over time, a large data stream accumulates, and considerable time elapses between data generation and deployment of a newly trained deep-learning model. In this article, we propose an accelerated computing system that considers time-variant properties for accurate prediction of energy demand by processing AMI stream data. The proposed system is a real-time training/inference system that processes AMI data over a distributed edge cloud. It comprises two core components: an adaptive incremental learning solver and deep-learning acceleration with FPGA-GPU resource scheduling. The adaptive incremental learning scheme adjusts the batch size and epoch count at each training iteration to reduce the time delay of the latest trained model, while preventing biased training caused by the sub-optimal optimizer of incremental learning. In addition, a resource scheduling scheme manages various accelerator resources for accelerated deep-learning processing while minimizing the computational cost. The experimental results demonstrate that our method adapts batch size and epoch count effectively for incremental learning while guaranteeing low inference error, a high model score, and queue stability with cost-efficient processing.
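The following is a minimal sketch, not the paper's implementation, of the two ideas the abstract describes: adjusting batch size and epoch count per incremental update so the deployed model does not go stale, and picking the cheapest accelerator that still meets the training deadline. All names, thresholds, and the control rule itself (target_delay_s, time_per_sample_s, the cost/latency fields) are assumptions introduced here for illustration.

```python
# Hedged sketch of adaptive batch/epoch selection for incremental stream training.
# The heuristic below is illustrative only; the paper's actual policy is not reproduced.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def choose_batch_and_epochs(backlog, target_delay_s, time_per_sample_s,
                            min_batch=32, max_batch=1024, max_epochs=5):
    """Pick a batch size and epoch count so one incremental update fits the delay budget.

    backlog           -- number of AMI samples queued since the last model update
    target_delay_s    -- acceptable staleness of the deployed model (seconds)
    time_per_sample_s -- measured per-sample training cost on the current accelerator
    """
    # How many sample passes we can afford before the model becomes stale.
    budget = max(1, int(target_delay_s / max(time_per_sample_s, 1e-9)))
    # Larger backlog -> larger batch (bounded); smaller backlog -> more epochs.
    batch = int(min(max_batch, max(min_batch, backlog)))
    epochs = max(1, min(max_epochs, budget // max(backlog, 1)))
    return batch, epochs

def incremental_update(model, optimizer, x_window, y_window, batch, epochs):
    """One incremental training pass on the newest window of stream data."""
    loader = DataLoader(TensorDataset(x_window, y_window),
                        batch_size=batch, shuffle=True)
    loss_fn = nn.MSELoss()
    model.train()
    for _ in range(epochs):
        for xb, yb in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            optimizer.step()
    return model
```

A similarly hedged sketch of the cost-aware accelerator selection: choose the cheapest device (e.g., FPGA vs. GPU) whose measured throughput still meets the update deadline. The device dictionaries and their fields are placeholders, not values from the paper.

```python
def pick_accelerator(devices, samples, deadline_s):
    """devices: list of dicts like {"name": "gpu0", "time_per_sample_s": 1e-4, "cost_per_s": 0.02}."""
    feasible = [d for d in devices
                if d["time_per_sample_s"] * samples <= deadline_s]
    if not feasible:
        return None  # no device meets the deadline; caller could shrink the batch instead
    # Minimize monetary cost of the job among deadline-feasible devices.
    return min(feasible, key=lambda d: d["time_per_sample_s"] * samples * d["cost_per_s"])
```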
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Issue Date
2020-10
Language
English
Article Type
Article
Citation

IEEE ACCESS, v.8, no.8, pp.195341 - 195358

ISSN
2169-3536
DOI
10.1109/access.2020.3033771
URI
http://hdl.handle.net/10203/277215
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
000587867000001.pdf (4.89 MB)