This paper considers an associative unsupervised domain adaptation algorithm for semantic segmentation of real urban drive-cam data using photo-realistic synthetic training data. To circumvent the difficulty of collecting and laboriously annotating large amounts of real urban scene data, large amounts of computer-annotated synthetic training data are provided as a substitute; however, without any treatment of the domain mismatch, a significant decrease in prediction performance is observed. Inspired by the recent success of an associative domain adaptation algorithm for simple classification, this algorithm is adapted to semantic segmentation to reduce the mismatch between the training and testing domains. This adaptation is not straightforward: associative learning must handle multiple instances within a single high-resolution image, as well as the ambiguous and unlabeled pixels present in semantic segmentation training datasets. In this paper, an algorithm is proposed to address these difficulties by partitioning each image into patches and associating labeled source patches with unlabeled target patches. Using the SYNTHIA and GTA5 datasets as source data, the model achieves state-of-the-art performance on the Cityscapes dataset.
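The associative alignment underlying this approach can be illustrated with the walker and visit losses of the classification-level associative domain adaptation it builds on. The sketch below is a minimal NumPy illustration under assumed names and shapes (`src_emb`, `tgt_emb`, `src_labels` are hypothetical), not the paper's actual implementation, which operates on patch embeddings rather than whole-image features:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def associative_losses(src_emb, tgt_emb, src_labels):
    """Walker and visit losses for associative domain adaptation (sketch).

    src_emb:    (Ns, D) embeddings of labeled source patches (illustrative).
    tgt_emb:    (Nt, D) embeddings of unlabeled target patches (illustrative).
    src_labels: (Ns,)   integer class labels of the source patches.
    """
    M = src_emb @ tgt_emb.T            # pairwise similarities, (Ns, Nt)
    p_st = softmax(M, axis=1)          # source -> target transition probs
    p_ts = softmax(M.T, axis=1)        # target -> source transition probs
    p_sts = p_st @ p_ts                # round-trip source -> target -> source

    # Walker loss: a round trip should land on a source patch of the same
    # class, so the target distribution is uniform over same-class patches.
    same_class = (src_labels[:, None] == src_labels[None, :]).astype(float)
    targets = same_class / same_class.sum(axis=1, keepdims=True)
    walker = -np.mean(np.sum(targets * np.log(p_sts + 1e-8), axis=1))

    # Visit loss: encourage all target patches to be visited, via
    # cross-entropy between the mean visit probability and uniform.
    visit_p = p_st.mean(axis=0)        # (Nt,)
    visit = -np.mean(np.log(visit_p + 1e-8))

    return walker, visit
```

In the full method these losses would be added to the supervised segmentation loss on the source patches; the choice of patch size controls how many instances each associative step must relate.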