Conversational agents (CAs) offer new functionality and convenience. While their sales have been soaring, they have also rapidly become targets of verbal abuse by their users. Without proper handling of abusive usage, abusers' behavior can be reinforced and transferred to real life. This study investigates whether alternative response styles of voice-activated virtual assistants, varying in empathy orientation and emotional expressivity, influence users' moral emotions known to reduce verbal aggression, and whether they affect user perceptions of the agent's capability. Ninety-eight participants were assigned to one of three emotional expressivity conditions (no facial expression, fixed facial expression, varied facial expression) and interacted with agents in two empathy orientation conditions (other-oriented, self-oriented). The experimental results show that, regardless of emotional expressivity type, the agent's empathy orientation has a significant effect on moral emotions and on perceptions of agent capability. Overall, an agent employing an other-oriented empathy style elicited the most positive responses from users. However, this preference was not universal: about one-third of the participants preferred the self-oriented CA. Users valued agents' verbal content and vocal characteristics over their facial expressions. Based on these findings, we derive several design guidelines and suggest avenues for future research.