Spatial Task-Explicity Matters in Prompting Large Multimodal Models for Spatial Planning

Author(s)
Ivan Majic, Zhangyu Wang, Krzysztof Janowicz, Mina Karimi
Abstract

Advances in large multimodal models (LMMs) are giving rise to autonomous bots that perform complex tasks with human-like reasoning. The ability of large models to understand spatial relations and perform spatial operations, however, is known to be limited. This gap hinders the development of autonomous GIS analysts, travel-planning assistants, and other spatial bots. In this paper, we explore the impact of modality on the performance of LMMs in spatial planning tasks: specifically, retrieving a target brick by first removing all other bricks on top of it. Experiments demonstrate that what matters is not only the modality of the prompt (text or image), but also how informative the spatial descriptions are for the LMM completing the task. We propose the novel concepts of task-implicit and task-explicit spatial descriptions to characterize the task-specific informativity of prompts. Furthermore, we develop simple techniques to increase the spatial task-explicity of image prompts, raising the accuracy of spatial planning from 26% to 100%.
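The brick-retrieval task described in the abstract can be thought of as a dependency-clearing plan: every brick stacked (directly or transitively) on top of the target must be removed first, in an order that never lifts a brick while another still rests on it. The sketch below is an illustrative assumption of ours, not the paper's actual task formulation; the `on_top_of` adjacency map and function names are hypothetical.

```python
# Hypothetical sketch of the brick-retrieval planning task: 'on_top_of'
# maps each brick to the ordered list of bricks resting directly on it.

def removal_plan(on_top_of, target):
    """Return an ordered list of bricks to remove before grabbing target."""
    order = []
    seen = set()

    def clear(brick):
        for upper in on_top_of.get(brick, []):
            if upper not in seen:
                seen.add(upper)
                clear(upper)          # first clear everything above 'upper'
                order.append(upper)   # then 'upper' itself can be removed

    clear(target)
    return order

# Example stack: C rests on B; B and D rest on A. Retrieving A requires
# removing C, then B, then D (D could also go earlier, being independent).
stack = {"A": ["B", "D"], "B": ["C"]}
print(removal_plan(stack, "A"))  # -> ['C', 'B', 'D']
```

A correct textual or visual prompt must convey exactly this "on top of" structure; in the paper's terms, a description is task-explicit when it makes these removal dependencies directly available to the model.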

Organisation(s)
Department of Geography and Regional Research
External organisation(s)
University of California, Santa Barbara
Pages
99-105
No. of pages
7
DOI
https://doi.org/10.1145/3687123.3698293
Publication date
11-2024
Peer reviewed
Yes
Austrian Fields of Science 2012
507003 Geoinformatics, 102001 Artificial intelligence, 102035 Data science
ASJC Scopus subject areas
Artificial Intelligence, Geography, Planning and Development
Portal url
https://ucrisportal.univie.ac.at/en/publications/5fa8d97f-6428-48ef-87a9-f301db306386