Convolutional Neural Network (CNN) models have achieved state-of-the-art performance in various computer vision tasks. However, it has been shown that there exist adversarial perturbations that can fool CNN classifiers when added to an input image, while remaining almost imperceptible to human eyes. It was subsequently shown that there also exist universal adversarial perturbations, which are image-agnostic and can fool CNN classifiers when added to any input image. In most real-world cases, attackers cannot access the target model, so attacks are typically performed under black-box settings, where attackers rely on the transferability of the perturbation. We therefore propose a new method, Dual Random Transformations (DRT), to increase the attack success rate of a universal adversarial perturbation (UAP) under black-box settings. DRT improves the transferability of UAPs by applying different, independently sampled random transformations to the input images and to the perturbation. Under black-box settings, DRT yielded a notable improvement in attack success rate compared to applying the same transformation to both the images and the perturbation, and it further improved transferability when combined with the MI, TI, and SI methods.
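The abstract does not specify which random transformations DRT uses; the sketch below is only an illustration of the core idea, assuming a resize-and-pad style transformation (as used in input-diversity attacks) and hypothetical helper names `random_resize_pad` and `drt_forward`. The key point it shows is that the images and the UAP receive independently sampled transformations before being combined.

```python
import torch
import torch.nn.functional as F


def random_resize_pad(x, low=0.9, high=1.1, out_size=224):
    """Randomly rescale a batch of tensors and pad/crop back to a fixed
    spatial size (an assumed, input-diversity-style transformation)."""
    scale = torch.empty(1).uniform_(low, high).item()
    new_size = max(1, int(out_size * scale))
    x = F.interpolate(x, size=(new_size, new_size),
                      mode="bilinear", align_corners=False)
    pad_total = out_size - new_size
    if pad_total >= 0:
        # Pad with a random offset so content position varies per call.
        left = torch.randint(0, pad_total + 1, (1,)).item()
        top = torch.randint(0, pad_total + 1, (1,)).item()
        x = F.pad(x, (left, pad_total - left, top, pad_total - top))
    else:
        # If upscaled beyond out_size, crop back with a random offset.
        off_h = torch.randint(0, -pad_total + 1, (1,)).item()
        off_w = torch.randint(0, -pad_total + 1, (1,)).item()
        x = x[..., off_h:off_h + out_size, off_w:off_w + out_size]
    return x


def drt_forward(model, images, uap):
    """Dual Random Transformations sketch: apply *independent* random
    transformations to the clean images and to the UAP, then add them
    before the forward pass used to craft/update the UAP."""
    t_images = random_resize_pad(images)  # transformation sampled for images
    t_uap = random_resize_pad(uap)        # independently sampled for the UAP
    adv = torch.clamp(t_images + t_uap, 0.0, 1.0)
    return model(adv)
```

In contrast, the baseline the abstract compares against would apply one shared transformation to `images + uap`; sampling the two transformations independently is what distinguishes DRT in this sketch.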