The self-organizing feature map is one of the most widely used neural network paradigms based on unsupervised competitive learning. However, the learning algorithm originally introduced by Kohonen is very slow when the map is not trivially small. The slowness is mainly caused by the winner search over the entire map that is required at each training step.
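To make the cost of that winner search concrete, the following is a minimal sketch of one Kohonen training step; the function name `kohonen_step` and the particular learning-rate and neighborhood parameters are illustrative assumptions, not taken from the thesis. The exhaustive `argmin` over all $M \times M$ units is the O($M^2$) search referred to above.

```python
import numpy as np

def kohonen_step(weights, x, lr, sigma):
    """One Kohonen update: exhaustive winner search plus neighborhood update.

    weights : (M, M, d) codebook of an M-by-M map
    x       : (d,) input vector
    lr      : learning rate
    sigma   : neighborhood radius (Gaussian)
    """
    M = weights.shape[0]
    # Winner search: squared distance to every unit -> O(M^2) per step.
    dists = np.sum((weights - x) ** 2, axis=-1)
    bi, bj = np.unravel_index(np.argmin(dists), dists.shape)
    # Gaussian neighborhood centered on the winner (bi, bj).
    ii, jj = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2.0 * sigma ** 2))
    # Move the winner and its neighbors toward the input.
    weights += lr * h[..., None] * (x - weights)
    return bi, bj
```

Since every call scans all $M^2$ units, the search dominates the running time once the map grows beyond a trivial size, which is precisely the bottleneck this thesis addresses.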
Many recent studies in this area have focused on improving the learning process. Some of them reduce the search space of the learning algorithm, while others restructure the self-organizing neural network to make the search process faster.
In this thesis, $L^*$ learning, a fast learning algorithm based on incremental ordering, is proposed. We start with only a few units evenly distributed over a large topological feature map and gradually increase the number of units until the entire map is covered. In the intermediate phases of learning, some units are well ordered while others are not, whereas in the Kohonen algorithm all units are only weakly ordered. During $L^*$ learning, the ordered units accelerate the winner search and speed the movement of the remaining unordered units toward their correct topological locations.
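The coarse-to-fine idea can be illustrated as follows. This is only a sketch in the spirit of incremental ordering, not the thesis's $L^*$ algorithm itself: the functions `som_step`, `grow`, and `train_coarse_to_fine`, and all parameter values, are assumptions made for illustration. The sketch trains a small map, then doubles its resolution by interpolating new units between already-ordered ones, and repeats until the target size is reached.

```python
import numpy as np

def som_step(W, x, lr, sigma):
    # Standard Kohonen update on an M-by-M map W of shape (M, M, d).
    M = W.shape[0]
    d2 = np.sum((W - x) ** 2, axis=-1)
    bi, bj = np.unravel_index(np.argmin(d2), d2.shape)
    ii, jj = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2.0 * sigma ** 2))
    W += lr * h[..., None] * (x - W)

def grow(W):
    # Double the map resolution: new units are linearly interpolated
    # between the already-ordered units, so they start near their
    # correct topological locations.
    M, _, d = W.shape
    old = np.linspace(0.0, 1.0, M)
    new = np.linspace(0.0, 1.0, 2 * M)
    tmp = np.empty((2 * M, M, d))
    for j in range(M):
        for k in range(d):
            tmp[:, j, k] = np.interp(new, old, W[:, j, k])
    out = np.empty((2 * M, 2 * M, d))
    for i in range(2 * M):
        for k in range(d):
            out[i, :, k] = np.interp(new, old, tmp[i, :, k])
    return out

def train_coarse_to_fine(data, target_M, steps_per_stage=500, seed=0):
    # Start with a tiny 2x2 map and alternate training with growing.
    rng = np.random.default_rng(seed)
    d = data.shape[1]
    W = rng.random((2, 2, d))
    while True:
        M = W.shape[0]
        for _ in range(steps_per_stage):
            x = data[rng.integers(len(data))]
            som_step(W, x, lr=0.2, sigma=max(M / 4.0, 0.7))
        if M >= target_M:
            return W
        W = grow(W)
```

Because each new unit is seeded between ordered neighbors, it only needs small corrective movements, which is the intuition behind the acceleration described above; the actual $L^*$ algorithm additionally exploits the ordered units to narrow the winner search.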
It is shown by theoretical as well as experimental analysis that the proposed learning algorithm reduces the training time from O($M^2$) to O($\log M$) per step for the $M \times M$ self-organizing feature map without any additional working space, while preserving the ordering properties of the Kohonen learning algorithm.