Recent studies have reported that deep learning techniques can achieve high performance in medical image analysis. However, the black-box nature of deep learning limits the interpretability of its diagnostic decisions. To increase confidence in these decisions and improve usability in real-world deployments, a new interpretable deep network is required. In this study, a novel deep network, named the interpretationlet network, is devised to visually interpret the diagnostic decisions of computer-aided diagnosis (CAD). The proposed method makes a diagnostic decision while indicating the important areas on the region-of-interest (ROI) image. Based on the observation that radiologists usually make a diagnostic decision using lesion characteristics (i.e., margin and shape in breast masses), the proposed method decomposes the visual interpretation into a margin interpretationlet and a shape interpretationlet. To guide each interpretationlet to represent meaningful information, a training method that uses the BI-RADS (Breast Imaging-Reporting and Data System) mass lexicon is proposed. To verify the effectiveness of the proposed method, comparative experiments were conducted on a public mammogram database. Experimental results show that the proposed method provides interpretable visual evidence within the deep network while achieving diagnostic performance comparable to that of existing methods. These results imply that the proposed interpretationlet network is a promising approach to developing explainable CAD.
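The decomposition into a margin interpretationlet and a shape interpretationlet can be sketched as two parallel attention branches over a shared ROI feature map. The following is a minimal toy forward pass, not the authors' actual architecture: the function names (`softmax_map`, `interpretationlet_forward`), the feature dimensions, and the use of attention-weighted pooling with a sigmoid classifier are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_map(logits):
    """Normalize a spatial logit map into an attention map that sums to 1."""
    flat = logits.ravel()
    flat = np.exp(flat - flat.max())
    return (flat / flat.sum()).reshape(logits.shape)

def interpretationlet_forward(roi_features, w_margin, w_shape, w_cls):
    """Toy forward pass (illustrative only): two interpretationlet branches
    produce spatial attention maps over the ROI; the attended features are
    pooled and combined into a single diagnostic score."""
    # roi_features: (H, W, C) feature map extracted from the ROI image
    margin_map = softmax_map(roi_features @ w_margin)  # (H, W) margin evidence
    shape_map = softmax_map(roi_features @ w_shape)    # (H, W) shape evidence
    # Attention-weighted pooling per branch -> one (C,) descriptor each
    margin_feat = np.tensordot(margin_map, roi_features, axes=([0, 1], [0, 1]))
    shape_feat = np.tensordot(shape_map, roi_features, axes=([0, 1], [0, 1]))
    logit = np.concatenate([margin_feat, shape_feat]) @ w_cls
    prob = 1.0 / (1.0 + np.exp(-logit))  # probability of malignancy
    return prob, margin_map, shape_map

H, W, C = 8, 8, 16
feats = rng.standard_normal((H, W, C))
prob, m_map, s_map = interpretationlet_forward(
    feats,
    rng.standard_normal(C),      # margin-branch weights (random placeholder)
    rng.standard_normal(C),      # shape-branch weights (random placeholder)
    rng.standard_normal(2 * C),  # classifier weights (random placeholder)
)
```

Each branch returns a spatial map alongside the decision, which is the property the abstract emphasizes: the network's output comes with per-lexicon visual evidence rather than a score alone.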