Deep neural networks (DNNs) are widely used for image recognition, speech recognition, and other pattern-analysis tasks. Despite their success, DNNs can be exploited by what are termed adversarial examples. An adversarial example is created by adding a small distortion to the input so that the DNN misclassifies it while the change goes undetected by humans or other systems. Such adversarial examples have been studied mainly in the image domain; recently, however, research has been expanding into the voice domain. For example, when an adversarial example is applied to an enemy wiretapping device (a victim classifier) in a military environment, the enemy device will misinterpret the intended message. In such a scenario, friendly wiretapping devices (protected classifiers) must not be deceived. The concept of a selective adversarial example is therefore useful in mixed situations, defined as situations in which there is both a classifier to be protected and a classifier to be attacked. In this paper, we propose a selective audio adversarial example with minimal distortion that is misclassified as a target phrase by the victim classifier but correctly classified as the original phrase by the protected classifier. To generate such examples, the input is transformed to minimize both the probability of incorrect classification by the protected classifier and the probability of correct classification by the victim classifier. We conducted experiments targeting the state-of-the-art DeepSpeech speech recognition model, using the Mozilla Common Voice dataset and the TensorFlow library. The results show that the proposed method generates selective audio adversarial examples with a 91.67% attack success rate and 85.67% protected-classifier accuracy.
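The joint optimization described above can be sketched as a weighted sum of three terms: the size of the added distortion, the victim classifier's loss with respect to the attacker's target phrase, and the protected classifier's loss with respect to the original phrase. The sketch below is a minimal illustration, not the paper's actual formulation; the function name `selective_loss` and the weights `c1` and `c2` are assumptions introduced here for clarity (in practice the two classifier losses would be CTC losses over transcriptions).

```python
def selective_loss(distortion_l2: float,
                   victim_loss_on_target: float,
                   protected_loss_on_original: float,
                   c1: float = 1.0,
                   c2: float = 1.0) -> float:
    """Illustrative selective adversarial objective (hypothetical names).

    Minimizing this value simultaneously keeps the distortion small,
    pushes the victim classifier toward transcribing the target phrase
    (low victim_loss_on_target), and keeps the protected classifier
    transcribing the original phrase (low protected_loss_on_original).
    """
    return (distortion_l2
            + c1 * victim_loss_on_target
            + c2 * protected_loss_on_original)
```

In a gradient-based attack, this scalar would be differentiated with respect to the added perturbation and minimized iteratively; the trade-off between attack success and protected-classifier accuracy is then governed by the relative weights `c1` and `c2`.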