User-generated online reviews of products and services are becoming increasingly important to potential customers making purchase decisions. At the same time, some online reviews are untrustworthy because some business owners hire people to write fake reviews, rendering automatic sentiment analysis and summarization meaningless. Fake review detection, however, is difficult even for humans, and previous approaches to automatic detection have therefore had only limited success. Noting from a previous study that people exhibit fictitious writing behaviors when composing deceptive reviews, which may lead to a word selection process different from that used for truthful reviews, we propose a novel approach to fake review detection that employs a generative model in which the words chosen while writing a document are assumed to be governed by the topics the writer selects. In other words, we assume that the distinctive features of fake reviews arise from “topic” distributions different from those of truthful reviews, and we detect fake reviews by comparing the two topic distributions generated by LDA from truthful and fake review document sets. Using an evaluation corpus constructed from Yelp reviews in seven categories, such as ‘hotels’ and ‘restaurants’, we show that our method outperforms a previously proposed word-based method by a significant margin and that it has little category dependency. We also offer a semantic interpretation of the topic modeling results.
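As a rough illustration of the approach described above, the sketch below (not the authors' implementation; the toy data, number of topics, and KL-divergence comparison rule are all assumptions) fits a single LDA model over truthful and fake reviews, averages the per-document topic mixtures of each set into a class-level topic distribution, and labels a new review by which class distribution its own topic mixture is closer to:

```python
# Hypothetical sketch of topic-distribution-based fake review detection.
# Assumptions: scikit-learn's LatentDirichletAllocation as the LDA model,
# tiny toy corpora, and KL divergence as the distribution-comparison rule.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

truthful = [
    "great food friendly staff clean rooms",
    "excellent service tasty breakfast comfortable bed",
]
fake = [
    "best hotel ever amazing amazing must visit",
    "absolutely perfect best place best deal ever",
]

vec = CountVectorizer()
X = vec.fit_transform(truthful + fake)
lda = LatentDirichletAllocation(n_components=4, random_state=0).fit(X)

theta = lda.transform(X)                      # per-document topic mixtures
truth_dist = theta[: len(truthful)].mean(axis=0)   # class-level distribution
fake_dist = theta[len(truthful):].mean(axis=0)

def kl(p, q, eps=1e-12):
    """KL divergence with smoothing to avoid log(0)."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def predict(review: str) -> str:
    """Label a review by the closer class-level topic distribution."""
    mix = lda.transform(vec.transform([review]))[0]
    return "truthful" if kl(mix, truth_dist) < kl(mix, fake_dist) else "fake"

label = predict("clean rooms and friendly staff")
```

A real system would of course use much larger review sets, tuned topic counts, and the paper's own comparison criterion rather than this illustrative KL rule.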