The purpose of this research is to design and develop a new alternative input method, called the HF Interface, for selecting and manipulating on-screen objects based on tracking the user's head pose and recognizing their facial expressions, using a 3D depth-sensing camera. To specify a target of interest, the HF Interface detects changes in the user's head position and orientation, and the detected angular deflection and velocity determine the magnitude and direction of change in the mouse cursor's position on the screen. To issue a desired pointing command with the HF Interface, eye blinks or lip movements are used (e.g., blinking the left eye for left click; blinking the right eye for right click; double blinking for double click; opening the mouth for drag). In pilot experimental tests conducted with individuals with spinal cord injury, the HF Interface showed strong potential as a viable alternative pointing method, consistently performing faster and more accurately than a dwell-clicking interface across all pointing operations. As a specific clinical application, the HF Interface is expected to be useful for both computer access and augmentative communication software.
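The abstract describes two mappings: head angular deflection and velocity drive cursor displacement, and discrete facial gestures trigger pointing commands. A minimal sketch of how such a controller might be structured is shown below; all names, the gain and dead-zone parameters, and the velocity-scaling rule are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: parameter values and the velocity-scaling rule
# are assumptions, not the HF Interface's published algorithm.

# Facial gestures mapped to pointing commands, as listed in the abstract.
GESTURE_COMMANDS = {
    "blink_left": "left_click",
    "blink_right": "right_click",
    "double_blink": "double_click",
    "mouth_open": "drag",
}

def head_to_cursor(yaw_deg, pitch_deg, angular_velocity_deg_s,
                   gain=8.0, dead_zone_deg=2.0):
    """Map head deflection and angular velocity to a cursor displacement
    (in pixels per frame). Deflection sets the direction; velocity scales
    the magnitude, so faster head motion moves the cursor farther."""
    def axis(deflection):
        # Ignore small involuntary head movements inside the dead zone.
        if abs(deflection) < dead_zone_deg:
            return 0.0
        return gain * deflection * (1.0 + 0.1 * abs(angular_velocity_deg_s))

    return axis(yaw_deg), axis(pitch_deg)
```

For example, a steady head pose inside the dead zone produces no cursor motion, while a 5-degree yaw deflection moves the cursor right at a rate that grows with head speed.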