To deploy deep neural networks on resource-constrained edge devices, various energy- and memory-efficient embedded accelerators have been proposed. However, although most off-the-shelf networks are well trained on vast amounts of data, previously unseen user data or accelerator-specific constraints can cause unexpected accuracy loss. Adapting the network to each user and device is therefore essential for making high-confidence predictions in a given environment. We propose simple but efficient data reformation methods that effectively reduce the communication cost to off-chip memory during this adaptation. Our proposal exploits the data's zero-centered distribution and spatial correlation to concentrate sporadically scattered bit-level zeros into value-level units. Consequently, we reduce communication volume by up to 55.6% per task with an area overhead of 0.79% during personalization training.
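The core idea can be illustrated in software. The following is a minimal sketch, not the paper's hardware implementation: it assumes a pipeline of spatial delta encoding (exploiting correlation between neighboring values) followed by a zigzag sign mapping (so that the all-ones sign-extension bits of small negative values become zeros). After this reformation, small correlated values have all-zero upper bytes, which a zero-skipping memory interface could then avoid transferring. The function names and the zigzag choice are assumptions for illustration.

```python
import numpy as np

def reform(data: np.ndarray) -> np.ndarray:
    """Concentrate bit-level zeros into whole zero bytes (illustrative).

    Assumed pipeline: spatial delta encoding, then zigzag mapping.
    Zero-centered, spatially correlated data yields small deltas,
    whose upper bytes become all-zero after the sign mapping.
    """
    # Spatial delta: neighboring values are similar, so deltas are small.
    delta = np.diff(data.astype(np.int32), prepend=data.flat[0])
    # Zigzag: map signed deltas to small unsigned ints, turning the
    # two's-complement sign-extension ones of negatives into zeros.
    return ((delta << 1) ^ (delta >> 31)).astype(np.uint32)

def zero_byte_ratio(words: np.ndarray) -> float:
    """Fraction of bytes that are exactly zero in the raw storage."""
    return float((words.view(np.uint8) == 0).mean())
```

For a zero-centered, smoothly varying signal, `zero_byte_ratio(reform(x))` is substantially higher than `zero_byte_ratio(x)`, since negative raw values carry sign-extension bytes of `0xFF` that the reformation converts to `0x00`.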