TY - GEN
T1 - Analyzing Social Media Texts and Images to Assess the Impact of Flash Floods in Cities
AU - Basnyat, Bipendra
AU - Anam, Amrita
AU - Singh, Neha
AU - Gangopadhyay, Aryya
AU - Roy, Nirmalya
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/6/12
Y1 - 2017/6/12
N2 - Computer Vision and Image Processing are emerging research paradigms. The increasing popularity of social media, micro-blogging services and ubiquitous availability of high-resolution smartphone cameras with pervasive connectivity are propelling our digital footprints and cyber activities. Such online human footprints related to an event-of-interest, if mined appropriately, can provide meaningful information to analyze the current course and pre- and post-impact, leading to the organizational planning of various real-time smart city applications. In this paper, we investigate the narrative (texts) and visual (images) components of Twitter feeds to improve the results of queries by exploiting the deep contexts of each data modality. We employ Latent Semantic Analysis (LSA)-based techniques to analyze the texts and Discrete Cosine Transformation (DCT) to analyze the images, which help establish the cross-correlations between the textual and image dimensions of a query. While each of the data dimensions helps improve the results of a specific query on its own, the contributions from the dual modalities can potentially provide insights that are greater than what can be obtained from the individual modalities. We validate our proposed approach using real Twitter feeds from a recent devastating flash flood in Ellicott City near the University of Maryland campus. Our results show that the images and texts can be classified with 67% and 94% accuracies respectively.
AB - Computer Vision and Image Processing are emerging research paradigms. The increasing popularity of social media, micro-blogging services and ubiquitous availability of high-resolution smartphone cameras with pervasive connectivity are propelling our digital footprints and cyber activities. Such online human footprints related to an event-of-interest, if mined appropriately, can provide meaningful information to analyze the current course and pre- and post-impact, leading to the organizational planning of various real-time smart city applications. In this paper, we investigate the narrative (texts) and visual (images) components of Twitter feeds to improve the results of queries by exploiting the deep contexts of each data modality. We employ Latent Semantic Analysis (LSA)-based techniques to analyze the texts and Discrete Cosine Transformation (DCT) to analyze the images, which help establish the cross-correlations between the textual and image dimensions of a query. While each of the data dimensions helps improve the results of a specific query on its own, the contributions from the dual modalities can potentially provide insights that are greater than what can be obtained from the individual modalities. We validate our proposed approach using real Twitter feeds from a recent devastating flash flood in Ellicott City near the University of Maryland campus. Our results show that the images and texts can be classified with 67% and 94% accuracies respectively.
KW - Computer Vision
KW - Image Analysis
KW - Social Media Analytics
KW - Social Sensors
UR - https://www.scopus.com/pages/publications/85022325363
U2 - 10.1109/SMARTCOMP.2017.7946987
DO - 10.1109/SMARTCOMP.2017.7946987
M3 - Conference contribution
AN - SCOPUS:85022325363
T3 - 2017 IEEE International Conference on Smart Computing, SMARTCOMP 2017
BT - 2017 IEEE International Conference on Smart Computing, SMARTCOMP 2017
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2017 IEEE International Conference on Smart Computing, SMARTCOMP 2017
Y2 - 29 May 2017 through 31 May 2017
ER -