The motivation of this thesis is to practice code review in depth. It is generally said that ``review`` is defined as inspecting a work product to find defects by manual methods, such as inspection by human eyes, before the ``test`` stage, and that a review may be conducted by the developer or by another party. In this thesis, however, the term ``review`` means ``to inspect the work product by the developer, not by another party, using both traditional manual methods and automation tools.`` The target of this code review is PMCenter, one of the MSE 2003-2004 studio projects, and the focus is on finding defects not only from the viewpoint of development standards, i.e., design rules and naming rules, but also from the viewpoint of PMCenter's quality attributes, i.e., performance and security.
From the review results, a few lessons are learned. First, defects that had not been found in the test stage of PMCenter development were detected in this code review. These are hidden defects that affect system quality and are difficult to find through testing. If the defects found in this code review had been fixed before the test stage of PMCenter development, the productivity and quality of the studio project would have been improved. Second, manual review takes much longer than automated review. In this code review, general check items were checked by an automation tool, while project-specific items were checked manually. If project-specific check items could also be checked by an automation tool, code review and the verification work after fixing defects would be conducted much more efficiently. Reflecting on this idea, an evolution model of code review is studied, which ultimately seeks fully automated review as an optimized form of code review.