Technologies for hands-free user interaction include a wearable computing device having an audio sensor. The audio sensor generates audio input data, and the wearable computing device detects one or more teeth-tapping events based on the audio input data. Each teeth-tapping event corresponds to a sound of the user contacting two or more of the user's teeth together. The wearable computing device performs a user interface operation in response to detection of the teeth-tapping events. The audio sensor may be a microphone or a bone conductance sensor. The wearable computing device may include two or more audio sensors to generate positional audio input data. The wearable computing device may identify a teeth-tapping command and select the user interface operation based on the identified command. The teeth-tapping command may identify a tap position or a tap pattern associated with the one or more teeth-tapping events. Other embodiments are described and claimed.
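The abstract does not specify how tap detection or command mapping is implemented. The sketch below is a minimal illustration, assuming a short-time-energy detector over buffered audio input data and a hypothetical table mapping tap patterns to user interface operations; all thresholds, function names, and command names are invented for illustration and are not drawn from the claims.

```python
import numpy as np

# Hypothetical parameters; real values would depend on the sensor and sampling rate.
SAMPLE_RATE_HZ = 8000
FRAME_SIZE = 256              # samples per analysis frame
ENERGY_THRESHOLD = 0.02       # short-time energy above which a frame counts as a tap
MIN_TAP_GAP_S = 0.1           # debounce: peaks closer together are treated as one tap
DOUBLE_TAP_WINDOW_S = 0.5     # max gap between taps in a double-tap pattern


def detect_taps(samples: np.ndarray) -> list[float]:
    """Return timestamps (seconds) of teeth-tapping events found in the audio buffer."""
    taps: list[float] = []
    for start in range(0, len(samples) - FRAME_SIZE, FRAME_SIZE):
        frame = samples[start:start + FRAME_SIZE]
        energy = float(np.mean(frame ** 2))
        t = start / SAMPLE_RATE_HZ
        # Register a tap only if the frame is loud enough and not part of the previous tap.
        if energy > ENERGY_THRESHOLD and (not taps or t - taps[-1] > MIN_TAP_GAP_S):
            taps.append(t)
    return taps


def classify_pattern(tap_times: list[float]) -> str:
    """Map a sequence of tap timestamps to a hypothetical tap-pattern name."""
    if not tap_times:
        return "none"
    if len(tap_times) >= 2 and tap_times[1] - tap_times[0] <= DOUBLE_TAP_WINDOW_S:
        return "double_tap"
    return "single_tap"


# Hypothetical mapping from identified command (tap pattern) to a user interface operation.
COMMANDS = {
    "single_tap": "select_item",
    "double_tap": "go_back",
}


def handle_audio(samples: np.ndarray) -> str | None:
    """Detect teeth-tapping events in audio input data and select the corresponding operation."""
    pattern = classify_pattern(detect_taps(samples))
    return COMMANDS.get(pattern)
```

A positional variant, as suggested by the two-sensor embodiment, could compare tap energy or arrival time across sensors to estimate a tap position (e.g., left versus right side of the jaw) and use that position as part of the command lookup.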