In recent studies of data augmentation for neural networks, test-time augmentation has been explored to extract optimal transformation policies that enhance performance at minimal cost.
The policy search method with the strongest input-data dependency trains a loss predictor
network to estimate suitable transformations for each given input image independently,
yielding instance-level transformation extraction. In this work, we propose a method that utilizes and
modifies the loss prediction pipeline to further improve performance through a cyclic search for suitable
transformations and the use of the entropy weight method. The cyclic use of the loss predictor allows
each input image to be refined with multiple transformations at more flexible transformation magnitudes.
When multiple augmentations are generated, we apply the entropy weight method to
reflect the data uncertainty of each augmentation, forcing the final result to focus on augmentations with
low uncertainty. Experimental results show convincing qualitative outcomes and robust performance
under corrupted data conditions.
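To make the aggregation step concrete, below is a minimal sketch of one plausible instantiation of entropy-based weighting over multiple augmented predictions. The function name `entropy_weighted_aggregate`, the normalization by maximum entropy, and the example probability vectors are all illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def shannon_entropy(p, eps=1e-12):
    # Shannon entropy of a probability vector (higher = more uncertain).
    return -float(np.sum(p * np.log(p + eps)))

def entropy_weighted_aggregate(probs):
    # probs: (n_augmentations, n_classes) softmax outputs for one input.
    # Hypothetical scheme: weight each augmentation by 1 - normalized entropy,
    # so low-uncertainty augmentations dominate the final prediction.
    probs = np.asarray(probs, dtype=float)
    entropies = np.array([shannon_entropy(p) for p in probs])
    max_entropy = np.log(probs.shape[1])      # entropy of the uniform distribution
    weights = 1.0 - entropies / max_entropy   # low entropy -> high weight
    weights = weights / weights.sum()
    return weights @ probs                    # entropy-weighted average prediction

# Example: one confident and one uncertain augmented view.
p_confident = np.array([0.9, 0.05, 0.05])
p_uncertain = np.array([0.4, 0.3, 0.3])
aggregated = entropy_weighted_aggregate([p_confident, p_uncertain])
```

Here the confident view receives most of the weight, so the aggregated prediction stays close to it rather than being diluted by the uncertain augmentation.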