DeepMV Captures Multiple Wi-Fi, Ultrasonic Data Sources to Accurately Classify Human Activity
Designed to sense activity without the subject needing to carry a dedicated device, DeepMV is more accurate than existing solutions.
Researchers at the State University of New York at Buffalo, Pennsylvania State University, the University of Illinois Urbana-Champaign, the University of Virginia, and JD Intelligent Cities Research have unveiled a deep-learning platform designed to recognise human activity by analysing signals from multiple wireless network devices, without the monitored individuals needing to carry any hardware of their own.
"Recently, significant efforts are made to explore device-free human activity recognition techniques that utilize the information collected by existing indoor wireless infrastructures without the need for the monitored subject to carry a dedicated device," the team explains. "Most of the existing work, however, focuses their attention on the analysis of the signal received by a single device."
"In practice, there are usually multiple devices 'observing' the same subject. Each of these devices can be regarded as an information source and provides us a unique 'view' of the observed subject. Intuitively, if we can combine the complementary information carried by the multiple views, we will be able to improve the activity recognition accuracy."
The team's proposal for just such a combined analysis: DeepMV, a multi-view deep-learning system for device-free human activity recognition. DeepMV is designed to capture reflected Wi-Fi signals and/or ultrasonic signals generated by an Apple iPad Mini 4 and captured on Huawei Nexus 6P smartphones, pre-process the data, then run it through a deep-learning network that outputs inferred activities.
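The core idea of combining multiple "views" of the same subject can be sketched in a few lines. The following is a hypothetical, minimal illustration, not the authors' actual architecture: each wireless source (say, a Wi-Fi receiver and an ultrasonic microphone) gets its own encoder, the per-view encodings are fused by concatenation, and a shared head scores the activity classes. All layer sizes, weights, and names here are illustrative assumptions.

```python
import numpy as np

# The nine activity labels reported for the DeepMV experiments.
ACTIVITIES = ["wiping the table", "typing", "writing", "rotating a chair",
              "moving a chair", "walking", "cleaning the floor",
              "running in place", "NULL"]

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class MultiViewClassifier:
    """Toy multi-view fusion: one encoder per view, concatenated features,
    one shared classification head. Untrained random weights, for shape only."""

    def __init__(self, view_dims, hidden=16, n_classes=len(ACTIVITIES)):
        # One encoder weight matrix per information source ("view").
        self.encoders = [rng.normal(0, 0.1, (d, hidden)) for d in view_dims]
        self.head = rng.normal(0, 0.1, (hidden * len(view_dims), n_classes))

    def predict_proba(self, views):
        # Encode each view independently, then fuse the complementary
        # information by concatenation before classifying.
        encoded = [relu(v @ w) for v, w in zip(views, self.encoders)]
        fused = np.concatenate(encoded)
        return softmax(fused @ self.head)

# Example: a 64-dim Wi-Fi feature vector plus a 32-dim ultrasonic one
# (dimensions chosen arbitrarily for the sketch).
model = MultiViewClassifier(view_dims=[64, 32])
probs = model.predict_proba([rng.normal(size=64), rng.normal(size=32)])
label = ACTIVITIES[int(np.argmax(probs))]
```

In a real system the encoders would be trained jointly, and DeepMV additionally learns to weight the views rather than simply concatenating them; the sketch only shows why multiple sources give the classifier complementary evidence.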
The DeepMV system is configured to recognise a series of activity types: wiping the table, typing, writing, rotating a chair, moving a chair, walking, cleaning the floor, running in place, and "NULL", a catch-all for an activity which was sensed but not recognised. The results proved promising: "Experimental results on the collected activity datasets show that DeepMV can achieve better results than the state-of-the-art device-free human activity recognition approaches," the team concludes, "and hence justify the effectiveness of our proposed DeepMV model for the task of human activity recognition."
The paper has been released under open-access terms as part of the ACM International Joint Conference on Pervasive and Ubiquitous Computing 2020 (UbiComp '20).