Robotic learning on real hardware requires an efficient algorithm that minimizes the number of trials needed to learn an optimal policy, since prolonged use of hardware causes wear and tear on the system and demands more attention from an operator. To this end, we present a novel black-box optimization algorithm, Reward Optimization with Compact Kernels and fast natural gradient regression (ROCK star). Our algorithm updates its knowledge immediately after each trial and is able to extrapolate in a controlled manner. These features make fast and safe learning on real hardware possible. We evaluated our algorithm on two simulated reaching tasks with a 50-degree-of-freedom robot arm and on a hopping task with a real articulated legged system. ROCK star outperformed current state-of-the-art algorithms in all tasks by a factor of three or more.
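To make the two features above concrete, the following is a minimal sketch of a generic trial-efficient black-box optimization loop, not the ROCK star algorithm itself (it omits the natural gradient machinery, and the reward function, kernel length scale, trust radius, and exploration bonus are all illustrative assumptions). It shows a kernel-regression surrogate refit immediately after every single trial, and candidate parameters confined to a trust region so that extrapolation stays bounded:

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(x):
    # Toy stand-in for one hardware trial: peak reward 0 at (1.0, -0.5).
    return -float(np.sum((x - np.array([1.0, -0.5])) ** 2))

def rbf(A, B, length=0.5):
    # Squared-exponential kernel between two sets of parameter vectors.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length ** 2))

def fit_predict(X, y, Xq, reg=1e-4):
    # Kernel ridge regression refit from scratch on all trials so far.
    mu = y.mean()
    K = rbf(X, X) + reg * np.eye(len(X))
    alpha = np.linalg.solve(K, y - mu)
    return mu + rbf(Xq, X) @ alpha

# Three seed trials, then one real evaluation per iteration: the
# surrogate is updated immediately after every single trial.
X = rng.uniform(-2.0, 2.0, size=(3, 2))
y = np.array([reward(x) for x in X])

trust = 0.5   # maximum per-step extrapolation from the incumbent
kappa = 1.0   # weight on the exploration bonus

for _ in range(40):
    best = X[np.argmax(y)]
    # Candidates confined to a trust region: extrapolation is bounded.
    cand = best + rng.uniform(-trust, trust, size=(200, 2))
    pred = fit_predict(X, y, cand)
    # Crude uncertainty proxy: low similarity to past trials -> bonus.
    novelty = 1.0 - rbf(cand, X).max(axis=1)
    pick = cand[np.argmax(pred + kappa * novelty)]
    X = np.vstack([X, pick])
    y = np.append(y, reward(pick))

print(np.max(y))  # best reward found after 40 single-trial updates
```

The trust region is the key safety mechanism for hardware: the next trial can never be further than a fixed distance from the best parameters seen so far, so a bad surrogate fit cannot command a wildly unsafe motion.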