Models of attention posit that attentional priority is established by summing the saliency and relevancy signals from feature-selective maps. The dimension-weighting account further hypothesizes that information from each feature-selective map is weighted according to expectations of how informative each dimension will be. In the current studies, we investigated whether attentional biases to the features of a conjunction target (color and orientation) differ when one dimension is expected to be more diagnostic of the target. In a series of color-orientation conjunction search tasks, observers saw an exact cue for the upcoming target while the probability of distractors sharing a target feature in each dimension was manipulated: in one context, distractors were more likely to share the target color, and in another, distractors were more likely to share the target orientation. The results indicated that, despite an overall bias toward color, attentional priority to each target feature was flexibly adjusted according to distractor context: response times were faster and accuracy was higher when the diagnostic feature was expected than when it was unexpected. This occurred both when the distractor context was learned implicitly and when it was known explicitly. These results suggest that feature-based enhancement can occur selectively for the dimension expected to be most informative in distinguishing the target from distractors.