Electromyography (EMG) is a technique used to measure and record the electrical activity produced by skeletal muscles. It provides valuable insights into muscle function, activation patterns, and overall neuromuscular activity. EMG is commonly utilized in various fields, including medical diagnostics, virtual reality, sports science, and human-computer interfaces.
To capture EMG measurements, small electrodes are placed on the skin overlying the targeted muscles. These electrodes detect the tiny electrical signals generated by muscle fibers during contraction and relaxation, which are then amplified and recorded. The recorded signals, known as electromyograms, display the electrical activity as a waveform. EMG signals can be analyzed to determine muscle activation levels, timing, coordination, and fatigue.
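One standard way to turn a raw EMG waveform into an activation estimate is a moving root-mean-square (RMS) envelope. The sketch below is illustrative only, using synthetic data rather than a real recording; the function name and window length are arbitrary choices, not part of any particular EMG toolkit.

```python
import numpy as np

def rms_envelope(signal, window=50):
    """Moving root-mean-square envelope: a common way to estimate
    muscle activation level from a raw EMG waveform."""
    squared = np.square(signal)
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(squared, kernel, mode="same"))

# Synthetic example: a burst of "muscle activity" amid resting baseline noise.
rng = np.random.default_rng(0)
emg = rng.normal(0.0, 0.05, 1000)           # resting baseline
emg[400:600] += rng.normal(0.0, 0.5, 200)   # contraction burst
env = rms_envelope(emg)
print(env[400:600].mean() > env[:400].mean())  # envelope is higher during the burst
```

The envelope's amplitude over time is what higher-level analyses (activation level, onset timing, fatigue trends) are typically computed from.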
The applications of EMG are diverse and impactful. In the field of human-computer interfaces, EMG enables users to control devices and interact with computers using muscle signals. It helps assess and monitor muscle function in patients with neuromuscular disorders or those recovering from injuries or surgeries. EMG sensors have also been integrated into VR systems so that users can experience more immersive and interactive virtual environments.
Since EMG sensors can also be compact and unobtrusive, they seem like a nearly ideal platform for a great many use cases. So why are there relatively few applications of this technology in commercial devices today? A major reason is that the signals produced by each individual can vary wildly based on biological factors like body fat percentage, skin condition, age, and fatigue level. This means the algorithms that interpret EMG signals must go through a complex and time-consuming calibration process for each new user before the system can be used.
Researchers at the City University of Hong Kong have developed a deep learning-based framework called EMGSense that can calibrate EMG sensing systems without the lengthy per-user effort that existing approaches demand. EMGSense is a low-effort framework that leverages self-supervision and self-training to deal with biological differences between individuals and accurately interpret their EMG signals.
The initial deep learning model was trained on a body of EMG data from a variety of users of a sensing system. This gives the model a somewhat generalized understanding of what EMG signals look like across a wide range of individuals. A small amount of unlabeled data is then collected from a new user of the system to fine-tune the model for their specific biological parameters.
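The data flow described above can be illustrated with a deliberately simplified stand-in: a nearest-centroid classifier instead of EMGSense's deep model, and 2-D synthetic "features" instead of real EMG data. The per-user offsets, cluster positions, and helper names here are all hypothetical; the point is only that a model pretrained on pooled multi-user data can be adapted to a shifted new user using unlabeled samples alone.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_user(offset, n=60):
    """Hypothetical 2-D EMG features for two gestures; `offset`
    models per-user biological variation (e.g. body composition)."""
    X = np.vstack([rng.normal([0, 0], 0.3, (n, 2)),
                   rng.normal([2, 2], 0.3, (n, 2))]) + offset
    y = np.array([0] * n + [1] * n)
    return X, y

# Phase 1: pretrain on pooled data from several users for a
# generalized picture of what each gesture looks like.
pool = [make_user(rng.normal(0, 0.2, 2)) for _ in range(5)]
Xp = np.vstack([X for X, _ in pool])
yp = np.concatenate([y for _, y in pool])
centroids = np.array([Xp[yp == c].mean(axis=0) for c in (0, 1)])

# Phase 2: a new user whose signals are shifted by their biology.
# Adapt using only their *unlabeled* data: pseudo-label it with the
# pretrained model, then re-estimate the class centers.
new_offset = np.array([0.6, 0.6])
Xn, yn = make_user(new_offset)
pseudo = np.argmin(np.linalg.norm(Xn[:, None] - centroids, axis=2), axis=1)
adapted = np.array([Xn[pseudo == c].mean(axis=0) for c in (0, 1)])

pred = np.argmin(np.linalg.norm(Xn[:, None] - adapted, axis=2), axis=1)
print((pred == yn).mean())  # accuracy on the new user after unlabeled adaptation
```

The true labels `yn` are used only to score the result at the end, never during adaptation, mirroring the claim that calibration needs no labeled data from the new user.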
The training process for EMGSense takes a two-pronged approach in which user-specific features are first removed from the training space. This has the effect of making the knowledge that the model encodes more transferable between different users. After this, EMG data from a new user of the system is collected and leveraged to learn their user-specific biological features to enable high-performance EMG sensing. Unlabeled data can also be collected over time to ensure good long-term performance of the system, even in the face of the time-varying nature of EMG signals.
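The long-term part of that loop, repeatedly folding in fresh unlabeled data as the signal drifts, can be sketched in the same simplified nearest-centroid setting (again a stand-in for the paper's deep model; the drift rate, update weight, and session structure are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def batch(centers, n=50):
    """One session of unlabeled samples drawn around the current
    (slowly drifting) true class centers."""
    X = np.vstack([rng.normal(c, 0.3, (n, 2)) for c in centers])
    y = np.repeat([0, 1], n)  # kept only to score the sessions
    return X, y

model = np.array([[0.0, 0.0], [2.0, 2.0]])  # calibrated class centroids
true_centers = model.copy()
drift = np.array([0.08, 0.08])              # slow per-session signal drift
accs = []
for session in range(20):
    true_centers = true_centers + drift
    X, y = batch(true_centers)
    pseudo = np.argmin(np.linalg.norm(X[:, None] - model, axis=2), axis=1)
    # Self-training update: nudge each centroid toward the mean of
    # its pseudo-labeled samples, so the model tracks the drift.
    for c in (0, 1):
        if np.any(pseudo == c):
            model[c] = 0.5 * model[c] + 0.5 * X[pseudo == c].mean(axis=0)
    accs.append((pseudo == y).mean())
print(min(accs))  # accuracy holds up across sessions despite the drift
```

Without the per-session update, the fixed centroids would slowly fall behind the drifting signal; the unlabeled self-training step is what keeps long-term performance stable.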
EMGSense was validated in a series of experiments conducted with 13 participants. An EMG sensor was built into a device designed to perform both gesture recognition and activity recognition. Average accuracy rates of 91.9% and 81.2% were observed in these tasks, respectively. The framework was found to outperform other existing EMG sensing adaptation approaches by about 12% to 17%, and was even shown to perform comparably to techniques leveraging supervised learning.
The methods described by the researchers have the potential to open up the world of EMG sensing to a much wider audience, and that is good for all of us. We could be seeing many more novel, interesting interfaces appear in the future for everything from entertainment and productivity to healthcare.