Residential College | false
Status | Published
Title | CrowdTelescope: Wi-Fi-positioning-based multi-grained spatiotemporal crowd flow prediction for smart campus
Author | Zhang, Shiyu; Deng, Bangchao; Yang, Dingqi
Date Issued | 2023-01
Source Publication | CCF Transactions on Pervasive Computing and Interaction |
ISSN | 2524-521X |
Volume | 5
Pages | 31-44
Abstract | Crowd flow prediction is one of the key problems in human mobility modeling, forecasting crowd flows of locations based on historical human mobility traces. Traditional human mobility traces (collected via telecommunication companies, online social media platforms, or field studies/experiments, etc.) suffer from severe data quality issues such as low precision, data sparsity, and insufficient coverage. In this paper, we investigate crowd flow prediction using Wi-Fi connection records on the campus of a university, which yield comprehensive, large-scale, high-coverage, and multi-grained (building/floor/room level) human mobility traces. However, we face not only non-trivial noise in the raw Wi-Fi connection data when extracting human mobility traces, but also the trade-off between location granularities and mobility patterns when modeling multi-grained crowd flow. Against this background, we propose CrowdTelescope, a Wi-Fi-positioning-based multi-grained spatiotemporal crowd flow prediction framework. We design a systematic approach for robust human mobility trace extraction from the noisy Wi-Fi connection records and adopt spatiotemporal Graph Neural Networks to model multi-grained crowd flow under a unified graph model for the three-level location hierarchy. We also develop a prototype system of CrowdTelescope, providing the interactive visualization of crowd flows on campus. We evaluate CrowdTelescope by collecting a Wi-Fi connection dataset on the campus of the University of Macau. Results show that CrowdTelescope can effectively extract informative human mobility traces from the noisy Wi-Fi connection records with an improvement of 3.3% over baselines, and also accurately predict on-campus crowd flow across different location granularities with 1.5%-24.1% improvements over baselines.
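As a rough illustration of the spatiotemporal modeling the abstract describes, the sketch below combines a graph convolution over a location adjacency matrix with a GRU over the time axis to forecast next-step crowd flow per location. It is a minimal sketch only, not the authors' CrowdTelescope implementation: the class name, the layer choices, all tensor shapes, and the toy adjacency matrix are assumptions, and the paper's unified graph over the building/floor/room hierarchy is stood in for by a single flat adjacency matrix.

```python
# Minimal sketch: a generic graph-convolution + GRU forecaster standing in for
# (not reproducing) the paper's spatiotemporal GNN; class name, shapes, and the
# toy adjacency matrix are all assumptions.
import torch
import torch.nn as nn

class STCrowdFlow(nn.Module):
    """Aggregate each location's neighbors, then model the time axis with a GRU."""

    def __init__(self, hidden_dim: int = 32):
        super().__init__()
        self.lift = nn.Linear(1, hidden_dim)   # lift scalar flow to features
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)   # next-step flow per location

    def forward(self, flows: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # flows: (batch, time, nodes); adj: (nodes, nodes), row-normalized
        b, t, n = flows.shape
        x = torch.relu(self.lift(flows.unsqueeze(-1)))   # (b, t, n, h)
        x = torch.einsum("ij,btjh->btih", adj, x)        # one-hop aggregation
        x = x.permute(0, 2, 1, 3).reshape(b * n, t, -1)  # one sequence per node
        _, h = self.gru(x)                               # h: (1, b*n, hidden)
        return self.head(h.squeeze(0)).view(b, n)        # (batch, nodes)

# Toy usage: 5 locations (e.g., rooms) and 12 historical time steps.
adj = torch.ones(5, 5)
adj = adj / adj.sum(dim=1, keepdim=True)   # row-normalize the adjacency
model = STCrowdFlow()
history = torch.rand(2, 12, 5)             # 2 sequences of per-location flows
print(model(history, adj).shape)           # -> torch.Size([2, 5])
```

In the paper, the three location granularities share one unified graph model; here the single adjacency matrix is only a placeholder for that structure.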
Keyword | Crowd Flow; Mobility; Smart Campus; Wi-Fi Positioning
DOI | 10.1007/s42486-022-00121-6 |
Indexed By | ESCI |
Language | English
WOS Research Area | Computer Science |
WOS Subject | Computer Science, Artificial Intelligence; Computer Science, Cybernetics; Computer Science, Information Systems; Computer Science, Interdisciplinary Applications
WOS ID | WOS:000898478000001 |
Publisher | Springer Nature, Campus, 4 Crinan St, London N1 9XW, England
Scopus ID | 2-s2.0-85143788105 |
Document Type | Journal article |
Collection | Faculty of Science and Technology; State Key Laboratory of Internet of Things for Smart City (University of Macau); Department of Computer and Information Science
Corresponding Author | Yang, Dingqi |
Affiliation | State Key Laboratory of Internet of Things for Smart City and Department of Computer and Information Science, University of Macau, Macau SAR, Macao |
First Author Affiliation | University of Macau
Corresponding Author Affiliation | University of Macau
Recommended Citation GB/T 7714 | Zhang, Shiyu, Deng, Bangchao, Yang, Dingqi. CrowdTelescope: Wi-Fi-positioning-based multi-grained spatiotemporal crowd flow prediction for smart campus[J]. CCF Transactions on Pervasive Computing and Interaction, 2023, 5: 31-44.
APA | Zhang, Shiyu, Deng, Bangchao, & Yang, Dingqi (2023). CrowdTelescope: Wi-Fi-positioning-based multi-grained spatiotemporal crowd flow prediction for smart campus. CCF Transactions on Pervasive Computing and Interaction, 5, 31-44.
MLA | Zhang, Shiyu, et al. "CrowdTelescope: Wi-Fi-positioning-based multi-grained spatiotemporal crowd flow prediction for smart campus". CCF Transactions on Pervasive Computing and Interaction 5 (2023): 31-44.
Files in This Item: | There are no files associated with this item. |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.