Abstract: This paper builds on previous results to construct a principled framework for efficiently solving partially observable multi-agent navigation problems with optimal deterministic planners. Under this framework, computational tractability is preserved by giving up global optimality while retaining collision avoidance guarantees. The framework accomplishes this by decomposing the navigation problem into decoupled deterministic collision avoidance and non-deterministic guidance problems. An example solution is derived and tested for a novel graph traversal problem using a deterministic, single-agent velocity profile planner in a partially observable, multi-agent setting.
[PDF] [PNT DEMO] [SLIDES] [PRESENTATION]
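To make the decomposition described above concrete, the following is a minimal sketch, not the paper's algorithm: a heuristic guidance layer proposes candidate velocities, and a deterministic collision-avoidance check filters them, so safety does not depend on the quality (or determinism) of the guidance. All function names, dynamics, and numbers are hypothetical.

```python
import random

def is_collision_free(position, velocity, obstacles, horizon=2.0, radius=0.5, steps=10):
    """Deterministic safety check: a constant-velocity rollout must stay clear of obstacles."""
    for i in range(1, steps + 1):
        t = horizon * i / steps
        x = position[0] + velocity[0] * t
        y = position[1] + velocity[1] * t
        if any((x - ox) ** 2 + (y - oy) ** 2 < radius ** 2 for ox, oy in obstacles):
            return False
    return True

def guidance_score(velocity, position, goal):
    """Heuristic (possibly non-deterministic) guidance: prefer velocities toward the goal."""
    return velocity[0] * (goal[0] - position[0]) + velocity[1] * (goal[1] - position[1])

def plan_step(position, goal, obstacles, candidates):
    """Keep only provably safe candidates, then let guidance choose among them."""
    safe = [v for v in candidates if is_collision_free(position, v, obstacles)]
    if not safe:
        return (0.0, 0.0)  # guaranteed-safe fallback: stop
    return max(safe, key=lambda v: guidance_score(v, position, goal))

if __name__ == "__main__":
    candidates = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20)]
    print(plan_step((0.0, 0.0), (5.0, 0.0), [(1.0, 0.1)], candidates))
```

The point of this structure is that the collision avoidance guarantee comes entirely from the deterministic filter, so the guidance layer can be as approximate as needed without affecting safety.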
Abstract: A reciprocal dance occurs when two mobile agents attempt to pass each other but a lack of coordination results in repeated attempts to take mutually incompatible actions. Often such a situation results only in deadlock, but in systems with significant inertial constraints it can result in collision. This paper describes this colliding variant of the reciprocal dance, explains how it arises, and presents a mitigation strategy that improves safety without sacrificing flexibility. A demonstration of the concept is provided in the context of automotive active safety.
[PDF] [EXTENDED ABSTRACT] [DATA SET] [SLIDES] [PRESENTATION]
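The uncoordinated, mirror-image decision making that produces the reciprocal dance can be illustrated with a toy simulation; this is only a sketch of the phenomenon, not the paper's mitigation strategy, and the lane-based dynamics are a hypothetical simplification.

```python
def sidestep(own_lane, other_lane):
    # Each agent moves away from where it last observed the other agent.
    return own_lane + (1 if other_lane <= own_lane else -1)

def simulate(steps=5):
    a = b = 0  # two agents approaching head-on, both centered in the same lane
    for t in range(steps):
        # Decisions are simultaneous and uncoordinated, so both agents react
        # to the same stale observation and keep choosing symmetric moves.
        a, b = sidestep(a, b), sidestep(b, a)
        status = "still blocking each other" if a == b else "resolved"
        print(f"t={t}: A in lane {a}, B in lane {b} -> {status}")

if __name__ == "__main__":
    simulate()
```

With no coordination, the symmetric policies never break the tie; once inertia is added, the repeated conflicting maneuvers are what can turn this deadlock into a collision.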
Abstract: This paper presents a vision-based control framework that attempts to mitigate several shortcomings of current approaches to mobile navigation, including the requirement for detailed 3D maps. The framework defines potential fields in image space and uses a subsumption process to combine hard, physical constraints with soft, guidance constraints while guaranteeing that hard constraint information is preserved. In addition, this representation can be defined with constant size, which can enable strong run-time guarantees to be made for visual servoing-based control. The framework is demonstrated with proof-of-concept examples in simulation and the real world, as well as data sets and an open source implementation.
[PDF] [POSTER] [BIB] [In IEEE CAVS 2018]
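The subsumption step described in the abstract can be sketched per pixel. The following assumes both constraint types are already encoded as potential fields with the shape of the image, and uses a pixel-wise maximum as the combination rule; that rule is an illustrative assumption (it guarantees a hard-constraint pixel can never be attenuated by guidance information), not necessarily the paper's exact operator.

```python
import numpy as np

H, W = 48, 64  # fixed image-space resolution: the representation has constant size

def combine(hard_field: np.ndarray, soft_field: np.ndarray) -> np.ndarray:
    """Pixel-wise subsumption: hard (physical) constraint values always survive."""
    assert hard_field.shape == soft_field.shape == (H, W)
    return np.maximum(hard_field, soft_field)

if __name__ == "__main__":
    hard = np.zeros((H, W))
    hard[:, 20:25] = 1.0                                # pixels covering a detected obstacle
    soft = np.linspace(0.8, 0.0, W) * np.ones((H, 1))   # guidance: prefer the right side
    field = combine(hard, soft)
    print(field[0, 22], field[0, 0])  # obstacle pixel keeps 1.0; free pixel keeps its guidance value
```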
This guest talk was given Dec. 11, 2017 as part of the Intelligent & Interactive Systems Talk Series at Indiana University.
Abstract: In industries as varied as mining, agriculture, health care, and automated driving, many practical applications in robotics involve interacting with intelligent agents while navigating dynamic environments. While impressive results have been demonstrated in these domains, there are still basic types of interacting navigation problems for which robust and general solutions have remained elusive. One such problem is efficient navigation in the presence of non-cooperative and non-adversarial agents. This is the kind of problem pedestrians face when navigating crowded sidewalks or drivers face when navigating crowded roadways. Two primary reasons this problem remains difficult are that the models used tend to exhibit prohibitive computational complexity and the formulations tend to impose difficult-to-satisfy requirements on problem input and representation. This talk will present recent work that provides more efficient models for this problem, as well as new, vision-based formulations that seek to significantly simplify input and representation requirements.
Abstract: This technical report presents an environment representation for use in vision-based navigation. The representation has two useful properties: 1) it has constant size, which can enable strong run-time guarantees to be made for control algorithms using it, and 2) it is structurally similar to a camera image space, which effectively allows control to operate in the sensor space rather than employing difficult, and often inaccurate, projections into a structurally different control space (e.g. Euclidean). The presented representation is intended to form the basis of a vision-based subsumption control architecture.
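The two properties claimed above can be illustrated with a small sketch: the representation is a fixed-shape per-pixel array indexed like the camera image, and a toy controller reads a steering command directly from it by scanning image columns. The column-selection rule and field-of-view value are hypothetical, not the report's control law; the point is that the cost of the control step depends only on the fixed array shape, never on scene complexity.

```python
import numpy as np

H, W = 48, 64       # constant-size representation, independent of the environment
FOV_DEG = 90.0      # assumed horizontal field of view of the camera

def steering_from_field(field: np.ndarray) -> float:
    """Map the least-obstructed image column to a steering angle in degrees."""
    assert field.shape == (H, W)
    column_cost = field.sum(axis=0)               # O(H*W): constant for fixed H, W
    best_col = int(np.argmin(column_cost))
    return (best_col / (W - 1) - 0.5) * FOV_DEG   # leftmost column -> -45, rightmost -> +45

if __name__ == "__main__":
    field = np.random.rand(H, W)
    field[:, : W // 2] += 5.0                     # heavily penalize the left half of the image
    print(steering_from_field(field))             # steers toward the right half
```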
Abstract Only: A computationally efficient monocular encroachment detection technique is presented, and a proof of concept is implemented on a low-cost mobile robot platform. This is an extended version of an abstract submitted to IROS 2017.
Abstract: This paper presents a preliminary conceptual investigation into an environment representation that has constant space complexity with respect to the camera image space. This type of representation allows the planning algorithms of a mobile agent to bypass what are often complex and noisy transformations between camera image space and Euclidean space. The approach is to compute per-pixel potential values directly from processed camera data, which results in a discrete potential field that has constant space complexity with respect to the image plane. This can enable planning and control algorithms, whose complexity often depends on the size of the environment representation, to be defined with constant run-time. This type of approach can be particularly useful for platforms with strict resource constraints, such as embedded and real-time systems.
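A minimal sketch of the per-pixel potential idea follows. It assumes the processed camera data takes the form of a binary obstacle mask in image space (the perception step producing that mask is outside the scope of the sketch), and the inverse-distance potential is an illustrative choice rather than the paper's definition. The property being shown is that the output field has exactly the shape of the input image, so its size is constant regardless of scene content.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def potential_field(obstacle_mask: np.ndarray) -> np.ndarray:
    """Per-pixel potential: 1.0 at obstacle pixels, decaying with image-plane distance."""
    # Distance (in pixels) from each pixel to the nearest obstacle pixel.
    dist = distance_transform_edt(~obstacle_mask)
    return 1.0 / (1.0 + dist)  # hypothetical potential; same shape as the input image

if __name__ == "__main__":
    mask = np.zeros((48, 64), dtype=bool)
    mask[20:28, 30:40] = True            # hypothetical detected obstacle region
    field = potential_field(mask)
    print(field.shape, float(field.max()), round(float(field[0, 0]), 3))
```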