The widespread adoption of Autonomous Vehicles (AVs) promises a transportation revolution by significantly reducing vehicle crashes. Despite these potential benefits, the social challenge of how AVs should behave during unavoidable crashes is actively debated by various stakeholders (e.g., researchers, company representatives, engineers). As human drivers will delegate control to AVs, considering the moral implications before AVs are deployed is essential for building public trust. However, there is a considerable gap in perceptions of AV ethics, as AVs are an emerging technology and few empirical studies have examined human moral perceptions of their ethical implications. Additionally, the current discussion of AV morality remains vague and must be translated into a machine-operationalizable ethics code. Moreover, engaging the public is crucial, as they will be the core users of AVs and will face emotional salience when accidents occur. In this dissertation, two phases of research using a mixed methodology were conducted to establish AV ethics that are transparent, predictable, and acceptable. In the first phase, a conceptual framework, the AV Ethical Decision-Making model, was developed to explain how individual and contextual factors affect potential AV users' ethical perceptions in AV moral dilemmas. In the second phase, a cross-cultural comparison study (collectivist culture: Korea vs. individualist culture: Canada) was conducted using a sequential mixed method with in-depth interviews to define AV moralities that align with societal values. The significant contribution of this dissertation is the discovery of moral decision-making patterns from the human perspective, which could be applicable to designing realistic AV moral behaviors. These lessons are essential to inform policymakers, AV developers, insurance companies, and the public, ultimately preparing society to build socially acceptable AVs.