In this work, we use a deep convolutional neural network (DCNN) trained on a public dataset, the Million Song Dataset, as a feature extractor. The network is trained discriminatively on audio mel-spectrograms using artist labels: the output layer contains a large number of neurons, each representing one artist. The output of the last hidden layer of the DCNN is regarded as an identity feature of the input audio. The DCNN extracts a feature vector from each 3-second audio segment and summarizes the segment-level vectors into a single musical feature for the whole input. These extracted features are then used to train a Support Vector Machine (SVM) classifier on MIREX audio classification tasks such as genre and mood classification. The results show that the proposed approach effectively captures general music audio features.
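The segment-to-track pipeline can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the DCNN's segment features are replaced by synthetic 256-dimensional vectors, mean pooling is assumed as the summarization step, and the linear-kernel SVM and the track/segment counts are arbitrary choices for the demo.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

def track_feature(segment_features):
    # Summarize per-segment DCNN features (last-hidden-layer outputs
    # for each 3-second segment) into one track-level vector.
    # Mean pooling is an assumption; the paper's exact summarization
    # may differ.
    return segment_features.mean(axis=0)

# Hypothetical data: 20 tracks, 10 three-second segments per track,
# each segment mapped to a 256-dim "identity feature". The class
# offset makes the synthetic data linearly separable.
X = np.stack([track_feature(rng.standard_normal((10, 256)) + label)
              for label in (0, 1) for _ in range(10)])
y = np.array([0] * 10 + [1] * 10)

# Train an SVM on the track-level features, as in the downstream
# genre/mood classification tasks.
clf = SVC(kernel="linear").fit(X, y)
print(clf.score(X, y))
```

In the actual system, `rng.standard_normal((10, 256))` would be replaced by the DCNN's activations for each segment of a track, and the labels would come from a MIREX task such as genre or mood.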