Vision-based urban navigation procedures for verbally instructed robots

Kyriacou, Theocharis (ORCID: https://orcid.org/0000-0002-5211-3686), Bugmann, Guido and Lauria, Stanislao (2005) Vision-based urban navigation procedures for verbally instructed robots. Robotics and Autonomous Systems, 51 (1), pp. 69-80.

Full text not available from this repository.

Abstract

When humans explain a task to be executed by a robot, they decompose it into chunks of actions. These chunks form a chain of search-and-act sensory-motor loops, each of which exits when a condition is met. In this paper we investigate the nature of these chunks in an urban visual navigation context and propose a method for implementing the corresponding robot primitives, such as "take the nth turn right/left". These primitives make use of a "short-lived" internal map that is updated as the robot moves along. Intersections are recognised and localised in this map using task-guided template matching. This approach exploits the content of the human instructions to save computation time and improve robustness.
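
To make the primitive concrete, the following Python sketch illustrates the idea under stated assumptions; it is not the authors' implementation, and the occupancy templates, the matching threshold, and the function names (ncc, take_nth_turn) are hypothetical. It shows a "take the nth turn left" search-and-act loop that scans successive short-lived local maps, applies task-guided template matching (only the intersection shape named by the instruction is tested), and exits once the nth matching intersection is reached.

    import numpy as np

    # Toy occupancy grids (1 = traversable): a straight corridor and a
    # left-turn intersection template. Both are illustrative assumptions.
    CORRIDOR = np.array([[0., 1., 0.],
                         [0., 1., 0.],
                         [0., 1., 0.]])
    LEFT_TURN = np.array([[0., 1., 0.],
                          [1., 1., 0.],
                          [0., 1., 0.]])

    def ncc(patch, template):
        """Normalised cross-correlation between a local-map patch and a template."""
        a = patch - patch.mean()
        b = template - template.mean()
        denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
        return float((a * b).sum() / denom) if denom else 0.0

    def take_nth_turn(local_maps, template, n, threshold=0.9):
        """Search-and-act loop for "take the nth turn": scan successive
        short-lived local maps, match only the template named by the verbal
        instruction (task-guided matching), and exit once the nth distinct
        intersection is reached. Returns the step at which to turn."""
        seen, inside = 0, False
        for step, local_map in enumerate(local_maps):
            match = ncc(local_map, template) > threshold
            if match and not inside:          # rising edge: a new intersection
                seen += 1
                if seen == n:
                    return step               # act here: issue the turn command
            inside = match
        raise RuntimeError("nth intersection never observed")

    # Simulated drive past two left-turn intersections; the primitive
    # fires at the second one, as "take the 2nd turn left" would require.
    maps = [CORRIDOR, LEFT_TURN, CORRIDOR, CORRIDOR, LEFT_TURN, CORRIDOR]
    print(take_nth_turn(maps, LEFT_TURN, n=2))   # -> 4

Restricting the matching to the single template named by the instruction is what, per the abstract, saves computation time and improves robustness compared with testing every known intersection shape against the map.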

Item Type: Article
Status: Published
DOI: 10.1016/j.robot.2004.08.011
School/Department: York Business School
URI: https://ray.yorksj.ac.uk/id/eprint/13144
