Although kernel-based unsupervised learning methods, such as kernel k-means and kernel Principal Component Analysis (KPCA), have great potential for analyzing complex data, their practicality is limited by their high computational complexity. In this paper, we present novel techniques based on subspace approximation that accelerate kernel k-means and KPCA, reducing their Omega(kn^2) complexity. For kernel k-means, we reduce the per-iteration complexity to O(n^(1+delta)) for any delta in (0, 1), and prove that our algorithm's per-iteration approximation of the distortion is within a factor of (1 + n^(-delta)) of the standard kernel k-means algorithm's distortion. For KPCA, we lower the complexity to O(kn^(1+delta) + k^3), where k is the number of principal components, and prove that the reduced-size problem we solve is spectrally equivalent to the original KPCA problem. Through extensive experiments, we confirm that our algorithms are substantially faster than the original algorithms while maintaining good accuracy.
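The abstract does not specify the authors' subspace construction, so the following is only a minimal sketch of the general idea it alludes to: instead of clustering with the full n x n kernel matrix, one can run kernel k-means in a low-dimensional feature space built from m << n landmark points (a Nystrom-style approximation). Every function name and parameter below is illustrative, not the paper's method.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel between rows of X and rows of Y.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def approx_kernel_kmeans(X, k, m, gamma=1.0, n_iter=20, seed=0):
    """Kernel k-means on a rank-m subspace approximation of the kernel.

    Sketch only: sample m landmarks, form C = K(X, landmarks) and
    W = K(landmarks, landmarks), and cluster the rows of the explicit
    feature map F = C W^{-1/2}, so that F F^T approximates the full
    kernel matrix K without ever materializing it.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=m, replace=False)
    C = rbf_kernel(X, X[idx], gamma)          # n x m
    W = rbf_kernel(X[idx], X[idx], gamma)     # m x m
    # W^{-1/2} via eigendecomposition (W is PSD); clip tiny eigenvalues.
    vals, vecs = np.linalg.eigh(W)
    vals = np.clip(vals, 1e-10, None)
    F = C @ vecs @ np.diag(vals ** -0.5) @ vecs.T  # n x m features

    # k-means++-style seeding in the feature space.
    centers = [F[rng.integers(n)]]
    for _ in range(k - 1):
        d2 = np.min(np.stack([((F - c) ** 2).sum(1) for c in centers]), axis=0)
        centers.append(F[rng.choice(n, p=d2 / d2.sum())])
    centers = np.array(centers)

    # Plain Lloyd iterations on the explicit features.
    for _ in range(n_iter):
        d = ((F[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = F[labels == j].mean(0)
    return labels
```

Because each Lloyd iteration works on n x m features rather than an n x n kernel matrix, the per-iteration cost drops from quadratic in n to O(nmk), which is the flavor of savings the abstract's O(n^(1+delta)) bound describes.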