This paper presents a surface-normal learning algorithm, few-shot kernel associative domain adaptation (FS-KADA), that reduces the domain shift between abundant synthetic source normals and a few real target normals. Models trained on synthetically rendered normals fail to accurately predict normals in real environments because of this domain shift. FS-KADA takes unpaired source and target samples as input and learns domain-invariant representations. To cope with the scarcity of real annotations, FS-KADA is trained with contextual weighting over the neighborhood of the target ground truth, kernel association in the latent space, and smoothing of the predictions. FS-KADA is evaluated on a real outdoor target dataset (SNOW) and a real indoor dataset (NYUv2), using a synthetic indoor dataset (MLT) as the source. State-of-the-art performance is observed on the SNOW dataset. On NYUv2, FS-KADA trained with the ground truth of a single randomly selected pixel per image is compared against methods that use the full ground truth.
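The "kernel association in the latent space" mentioned above can be illustrated with a minimal sketch. The construction below is an assumption, not the paper's exact formulation: it follows the round-trip idea of associative domain adaptation (source embedding → similar target embedding → back to the source), with a Gaussian kernel standing in for the similarity measure. Aligned latent spaces yield a near-zero loss; misaligned ones are penalized.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise RBF similarities between rows of a (n, d) and b (m, d).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kernel_association_loss(src, tgt, sigma=1.0):
    # Round-trip association: each source embedding should "walk" to the
    # target domain via kernel similarities and return to itself.
    k = gaussian_kernel(src, tgt, sigma)
    p_st = k / (k.sum(axis=1, keepdims=True) + 1e-12)      # source -> target
    p_ts = k.T / (k.T.sum(axis=1, keepdims=True) + 1e-12)  # target -> source
    p_sts = p_st @ p_ts                                    # round-trip probs
    # Cross-entropy against the identity target: penalize source samples
    # whose round-trip probability mass does not land back on themselves.
    return float(-np.mean(np.log(np.diag(p_sts) + 1e-12)))

# Well-aligned source/target embeddings produce a small loss.
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
tgt = src + 0.01
print(kernel_association_loss(src, tgt))
```

In the full algorithm this term would be minimized jointly with the supervised normal-regression loss on the few labeled target pixels, pulling the unlabeled target embeddings toward their synthetic counterparts.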