Spatial Task-Explicity Matters in Prompting Large Multimodal Models for Spatial Planning

Author(s)
Ivan Majic, Zhangyu Wang, Krzysztof Janowicz, Mina Karimi
Abstract

Advances in large multimodal models (LMMs) give rise to autonomous bots that perform complex tasks on their own using human-like reasoning. The ability of large models to understand spatial relations and perform spatial operations, however, is known to be limited. This gap hinders the development of autonomous GIS analysts, travel planning assistants, and other kinds of spatial bots. In this paper, we explore the impact of modality on the performance of LMMs in spatial planning tasks, specifically retrieving a target brick by first removing all other bricks on top of it. Experiments demonstrate that what matters is not only the modality of the prompts (text or image) but also how informative the spatial descriptions are for the LMMs to complete the task. We propose the novel concepts of task-implicit and task-explicit spatial descriptions to characterize the task-specific informativity of prompts. Furthermore, we develop simple techniques to increase the spatial task-explicity of image prompts, which raise the accuracy of spatial planning from 26% to 100%.

Organisation(s)
Institut für Geographie und Regionalforschung
External organisation(s)
University of California, Santa Barbara
Pages
99-105
Number of pages
7
DOI
https://doi.org/10.1145/3687123.3698293
Publication date
11-2024
Peer-reviewed
Yes
ÖFOS 2012
507003 Geoinformatics, 102001 Artificial Intelligence, 102035 Data Science
ASJC Scopus subject areas
Artificial Intelligence, Geography, Planning and Development
Link to portal
https://ucrisportal.univie.ac.at/de/publications/5fa8d97f-6428-48ef-87a9-f301db306386