TY - CPAPER
T1 - PSM: Learning Probabilistic Embeddings for Multi-scale Zero-shot Soundscape Mapping
T2 - 32nd ACM International Conference on Multimedia, MM 2024
AU - Khanal, Subash
AU - Xing, Eric
AU - Sastry, Srikumar
AU - Dhakal, Aayush
AU - Xiong, Zhexiao
AU - Ahmad, Adeel
AU - Jacobs, Nathan
N1 - Publisher Copyright:
© 2024 Owner/Author.
PY - 2024/10/28
Y1 - 2024/10/28
AB - A soundscape is defined by the acoustic environment a person perceives at a location. In this work, we propose a framework for mapping soundscapes across the Earth. Since soundscapes involve sound distributions that span varying spatial scales, we represent locations with multi-scale satellite imagery and learn a joint representation among this imagery, audio, and text. To capture the inherent uncertainty in the soundscape of a location, we design the representation space to be probabilistic. We also fuse ubiquitous metadata (including geolocation, time, and data source) to enable learning of spatially and temporally dynamic representations of soundscapes. We demonstrate the utility of our framework by creating large-scale soundscape maps integrating both audio and text with temporal control. To facilitate future research on this task, we also introduce a large-scale dataset, GeoSound, containing over 300k geotagged audio samples paired with both low- and high-resolution satellite imagery. We demonstrate that our method outperforms the state of the art on both GeoSound and the existing SoundingEarth dataset. Our dataset and code are available at https://github.com/mvrl/PSM.
KW - audio visual learning
KW - probabilistic representation learning
KW - soundscape mapping
UR - http://www.scopus.com/inward/record.url?scp=85209772175&partnerID=8YFLogxK
DO - 10.1145/3664647.3681620
M3 - Conference contribution
AN - SCOPUS:85209772175
T3 - MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia
SP - 1361
EP - 1369
BT - MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia
PB - Association for Computing Machinery, Inc
Y2 - 28 October 2024 through 1 November 2024
ER -
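
The abstract describes the framework only at a high level. The PyTorch sketch below is a purely illustrative reconstruction of one ingredient it mentions, a probabilistic joint embedding in which each modality encoder outputs a Gaussian (mean and log-variance) in a shared space and an InfoNCE-style contrastive loss scores pairs by the expected squared distance between the two Gaussians. It is not the authors' implementation (the official code is at https://github.com/mvrl/PSM), and all module, function, and variable names here are hypothetical.

# Illustrative sketch only, not the PSM authors' code. Assumes PyTorch.
# Each modality encoder (e.g., for multi-scale satellite imagery or audio,
# possibly with fused metadata) maps a feature vector to a Gaussian
# embedding (mu, log_var); pairs are scored by negative expected squared
# distance between the two diagonal Gaussians.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ProbabilisticHead(nn.Module):
    """Maps a backbone feature vector to a Gaussian embedding."""
    def __init__(self, in_dim: int, embed_dim: int):
        super().__init__()
        self.mu = nn.Linear(in_dim, embed_dim)
        self.log_var = nn.Linear(in_dim, embed_dim)

    def forward(self, x):
        return self.mu(x), self.log_var(x)

def gaussian_similarity(mu_a, log_var_a, mu_b, log_var_b):
    """Negative expected squared distance between independent diagonal
    Gaussians: -E||z_a - z_b||^2 = -(||mu_a - mu_b||^2 + tr(S_a) + tr(S_b))."""
    sq_dist = ((mu_a - mu_b) ** 2).sum(-1)
    trace = log_var_a.exp().sum(-1) + log_var_b.exp().sum(-1)
    return -(sq_dist + trace)

def contrastive_loss(mu_img, lv_img, mu_aud, lv_aud, temperature=0.07):
    """Symmetric InfoNCE-style loss over pairwise Gaussian similarities;
    the i-th image and i-th audio clip in the batch are the positive pair."""
    b = mu_img.size(0)
    # Pairwise (b x b) similarity matrix via broadcasting.
    sim = gaussian_similarity(
        mu_img[:, None, :], lv_img[:, None, :],
        mu_aud[None, :, :], lv_aud[None, :, :],
    ) / temperature
    target = torch.arange(b)
    return 0.5 * (F.cross_entropy(sim, target) + F.cross_entropy(sim.T, target))

# Toy usage with random features standing in for the imagery and audio
# backbones, just to show the shapes and that the loss is differentiable.
if __name__ == "__main__":
    feat_dim, embed_dim, batch = 512, 128, 8
    img_head = ProbabilisticHead(feat_dim, embed_dim)
    aud_head = ProbabilisticHead(feat_dim, embed_dim)
    img_feat = torch.randn(batch, feat_dim)  # e.g., multi-scale imagery features
    aud_feat = torch.randn(batch, feat_dim)  # e.g., audio features
    loss = contrastive_loss(*img_head(img_feat), *aud_head(aud_feat))
    loss.backward()
    print(f"loss = {loss.item():.4f}")

The predicted variances give the representation its probabilistic character: locations whose soundscapes are inherently ambiguous can be assigned wider Gaussians, which this similarity penalizes, rather than being forced onto a single point as in a deterministic embedding.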