As computing power and artificial intelligence (AI) technology have advanced remarkably in recent years, new AI-based services have been emerging actively throughout industry. Naturally, the types and number of problems that AI can solve have become more diverse and sophisticated, and problem complexity has grown. As a result, instead of optimizing a specific network offline for one single problem, a single device needs to learn multiple tasks (or datasets) continually using general-purpose AI technology. In addition, as problem complexity and diversity increase, the amount of training data grows ever larger, and training a deep neural network from scratch for a long time on huge datasets does not suit most services that must adapt quickly to a user's personal data. To address this problem, transfer learning and continual learning techniques have recently attracted attention. Continual learning allows a single artificial neural network to learn new tasks quickly and continually without forgetting previously learned knowledge, so an autonomous intelligent agent can grow gradually by learning user-specific data. In this paper, we propose memory-based continual learning for an autonomous intelligent agent. Specifically, based on Adaptive Resonance Theory (ART), a type of unsupervised learning network, we develop Deep ART, which can learn a series of events as an episode. Moreover, we propose the Developmental Resonance Network (DRN), which overcomes a shortcoming of existing ART networks: they can learn only input data normalized between 0 and 1. We then combine the two networks into the Episodic Memory-DRN (EM-DRN) and apply this memory network as an episodic memory for robot task performance.
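To make the ART mechanism underlying Deep ART and DRN concrete, the following is a minimal sketch of a classic Fuzzy ART clusterer, not the paper's networks themselves. It illustrates the vigilance test that decides between resonance (updating an existing category) and creating a new category, as well as the [0, 1] normalization requirement that DRN is designed to remove. Parameter names and defaults are illustrative.

```python
import numpy as np

class FuzzyART:
    """Minimal Fuzzy ART clusterer (a sketch; parameters are illustrative).

    Inputs must lie in [0, 1] -- exactly the ART limitation that
    DRN is proposed to overcome.
    """

    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho = rho      # vigilance: higher -> finer, more categories
        self.alpha = alpha  # choice parameter
        self.beta = beta    # learning rate (1.0 = fast learning)
        self.w = []         # one weight vector per learned category

    def _complement_code(self, x):
        # Standard complement coding: [x, 1 - x]
        x = np.asarray(x, dtype=float)
        return np.concatenate([x, 1.0 - x])

    def train(self, x):
        i = self._complement_code(x)
        # Rank existing categories by the choice function.
        order = sorted(
            range(len(self.w)),
            key=lambda j: -(np.minimum(i, self.w[j]).sum()
                            / (self.alpha + self.w[j].sum())))
        for j in order:
            match = np.minimum(i, self.w[j]).sum() / i.sum()
            if match >= self.rho:  # vigilance test passed: resonance
                self.w[j] = (self.beta * np.minimum(i, self.w[j])
                             + (1.0 - self.beta) * self.w[j])
                return j
        # No category matched: grow the network with a new category.
        self.w.append(i)
        return len(self.w) - 1
```

Because a new category is created whenever vigilance fails, the network grows incrementally as data arrive rather than being fixed in size offline, which is the property continual-learning memories build on. Note also how clustering granularity depends directly on the hand-tuned vigilance rho, which is the sensitivity that motivates D2RN below.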
To extend this technology, we propose EM-DRN-MAP (EDM), which maps multiple EM-DRNs for real-time supervised learning, and we demonstrate the effectiveness of EDM by applying it to a recipe recommendation system as a real-world problem. To overcome the problem that DRN's clustering performance is highly sensitive to changes in the model dynamics, we newly propose D2RN. This network learns the vigilance parameter, which determines the model dynamics, by itself. As a result, D2RN achieves robust clustering performance without optimizing the model dynamics beforehand. Finally, we propose the Convolutional Neural Network with Developmental Memory (CNN-DM), which adds a developmental memory (DM) to the CNN, one of the representative deep neural networks for supervised learning. Each time a new image classification problem arrives, a new sub-memory is generated in the DM to preserve the performance on old tasks, and a new learning method, called developmental memory learning, is introduced to learn the target task effectively. Future research directions are to improve CNN-DM to resolve its scalability issue and to propose a new learning scheme for continual learning of deep neural networks. Furthermore, we plan to apply transfer learning to deep reinforcement learning, which is a more complex problem than image classification.
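The core idea of growing a per-task sub-memory while leaving old tasks untouched can be sketched in a toy form. The code below is not the CNN-DM model itself: a fixed random projection stands in for a frozen CNN backbone, and one linear "sub-memory" head per task stands in for the developmental memory; all names are hypothetical. It demonstrates only the preservation property: learning a new task cannot change predictions on earlier tasks, because each task owns its own head.

```python
import numpy as np

class DevelopmentalMemoryNet:
    """Toy sketch of the sub-memory-per-task idea (illustrative names).

    A frozen shared feature extractor plays the role of the CNN backbone;
    each task gets its own linear head, so old tasks are never overwritten.
    """

    def __init__(self, in_dim, feat_dim=32, seed=0):
        rng = np.random.default_rng(seed)
        self.W_shared = rng.standard_normal((in_dim, feat_dim))  # frozen
        self.heads = {}  # task_id -> (weights, bias) sub-memory

    def _features(self, X):
        return np.tanh(X @ self.W_shared)  # shared, never updated

    def learn_task(self, task_id, X, y, n_classes, lr=0.5, epochs=500):
        """Create a new sub-memory for this task via softmax regression.

        Heads of previously learned tasks are untouched, so their
        performance is preserved by construction.
        """
        F = self._features(np.asarray(X, dtype=float))
        W = np.zeros((F.shape[1], n_classes))
        b = np.zeros(n_classes)
        Y = np.eye(n_classes)[y]
        for _ in range(epochs):  # gradient descent on the new task only
            logits = F @ W + b
            p = np.exp(logits - logits.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)
            g = (p - Y) / len(F)
            W -= lr * F.T @ g
            b -= lr * g.sum(axis=0)
        self.heads[task_id] = (W, b)

    def predict(self, task_id, X):
        W, b = self.heads[task_id]
        return np.argmax(self._features(np.asarray(X, dtype=float)) @ W + b,
                         axis=1)
```

The design choice shown here, isolating task-specific parameters while sharing a common representation, avoids catastrophic forgetting but adds parameters for every new task, which is precisely the scalability issue cited above as future work.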