In the Latent Dirichlet Allocation (LDA) model, each image is represented as a mixture of latent topics over visual words. Since previous LDA-based models cannot exploit the spatial information of visual words in images, this paper focuses on discovering the latent topics of images with visual saliency. To this end, a saliency-weighted LDA (swLDA) model is proposed that incorporates visual saliency into the topic distribution of visual words, in a manner similar to human perception. The topic distributions of the visual words are learned with saliency weights that reflect whether each visual word lies in a salient or a non-salient region. Experimental results demonstrate that the swLDA model effectively incorporates visual saliency as a focus of attention, mimicking human perceptual behavior, and substantially outperforms previous LDA models in image categorization.
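The core idea of weighting visual words by saliency can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the function name, the binary salient/non-salient split, and the specific weight values are assumptions made for the example.

```python
import numpy as np

def saliency_weighted_counts(word_ids, in_salient_region, n_words,
                             w_salient=1.0, w_background=0.3):
    """Build a saliency-weighted bag-of-visual-words vector for one image.

    word_ids          : visual-word index assigned to each local descriptor
    in_salient_region : bool per descriptor, True if it lies in a salient region
    n_words           : size of the visual vocabulary

    Descriptors in salient regions contribute more to the word counts that
    the topic model is fit on; the weights here are illustrative only.
    """
    counts = np.zeros(n_words)
    for w, salient in zip(word_ids, in_salient_region):
        counts[w] += w_salient if salient else w_background
    return counts

# Example: three descriptors, two of them in salient regions.
counts = saliency_weighted_counts(
    word_ids=[0, 0, 1],
    in_salient_region=[True, False, True],
    n_words=3,
)
# counts is [1.3, 1.0, 0.0]: word 0 appears once salient and once in the
# background, word 1 once salient, word 2 not at all.
```

The weighted count matrix over a collection of images could then be passed to a standard LDA inference routine, so that words from salient regions exert more influence on the learned topic distributions.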