SLiM: Semantic Linking Maps for Active Visual Object Search. ICRA 2020. ICRA 2019 Best Workshop Paper.
Zhen Zeng, Adrian Röfer, Odest Chadwicke Jenkins
We aim for mobile robots to function in a variety of common human environments. Such robots need to be able to reason about the locations of previously unseen target objects. Landmark objects can help this reasoning by narrowing down the search space significantly. More specifically, we can exploit background knowledge about common spatial relations between landmark and target objects.
In this paper, we propose an active visual object search strategy through our introduction of the Semantic Linking Maps (SLiM) model. SLiM simultaneously maintains beliefs over the locations of the target object and of landmark objects, while accounting for probabilistic inter-object spatial relations. Based on SLiM, we describe a hybrid search strategy that selects the next best view pose for searching for the target object based on the maintained belief. We demonstrate the effectiveness of our SLiM-based search strategy through comparative experiments in simulated environments. We further demonstrate the real-world applicability of SLiM-based search with a Fetch mobile manipulation robot.
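The core idea of conditioning the target belief on landmark beliefs can be sketched as follows. This is a minimal 1-D grid illustration under our own assumptions (cell count, a Gaussian "near" relation, and the priors below are illustrative), not the SLiM implementation:

```python
import numpy as np

GRID = 20  # number of grid cells (illustrative)

def spatial_relation(dist, sigma=2.0):
    """P(target at x | landmark at y) as a function of cell distance:
    a toy Gaussian 'near' relation."""
    return np.exp(-0.5 * (dist / sigma) ** 2)

def target_belief_from_landmark(landmark_belief):
    """Marginalize over landmark locations: b(x) = sum_y P(x|y) b_L(y)."""
    belief = np.zeros(GRID)
    for x in range(GRID):
        for y in range(GRID):
            belief[x] += spatial_relation(abs(x - y)) * landmark_belief[y]
    return belief / belief.sum()

# A landmark (e.g., a table) believed to be around cell 5.
landmark = np.zeros(GRID)
landmark[5] = 0.7
landmark[6] = 0.3

target = target_belief_from_landmark(landmark)
best_view = int(np.argmax(target))  # next-best-view: look where belief peaks
```

Observing (or failing to observe) the landmark reshapes the target belief, which is what lets landmarks narrow the search space before the target is ever seen.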
Zhen Zeng, Yunwen Zhou, Odest Chadwicke Jenkins, Karthik Desingh
We present a filtering-based method for semantic mapping, Contextual Temporal Mapping (CT-Map), that simultaneously detects objects and localizes their 6 degree-of-freedom poses. CT-Map models the semantic mapping problem as a Conditional Random Field (CRF) that accounts for contextual relations between objects and temporal consistency of object poses, along with a measurement potential. A particle filtering based algorithm is then proposed to perform inference in the CT-Map model. Our results demonstrate that CT-Map provides improved object detection and pose estimation with respect to baseline methods that treat observations as independent samples of a scene.
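The interplay of measurement and temporal potentials in the particle filter can be sketched in one dimension. The potentials, noise scales, and the spurious-detection scenario below are our own illustrative assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def measurement_potential(pose, detection, sigma=0.5):
    """Agreement between a hypothesized pose and the detector output."""
    return np.exp(-0.5 * ((pose - detection) / sigma) ** 2)

def temporal_potential(pose, prev_pose, sigma=0.2):
    """Consistency of a hypothesized pose with the previous estimate."""
    return np.exp(-0.5 * ((pose - prev_pose) / sigma) ** 2)

def pf_step(particles, prev_estimate, detection):
    # Propagate with small motion noise, then weight by both potentials.
    particles = particles + rng.normal(0.0, 0.05, size=particles.shape)
    w = (measurement_potential(particles, detection)
         * temporal_potential(particles, prev_estimate))
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)  # resample
    return particles[idx]

particles = rng.normal(1.0, 0.3, size=500)
# A spurious detection at 3.0 is tempered by temporal consistency with pose 1.0,
# rather than being trusted as an independent sample of the scene.
particles = pf_step(particles, prev_estimate=1.0, detection=3.0)
estimate = particles.mean()
```

The product of potentials is what distinguishes this from per-frame detection: an outlier detection moves the estimate only slightly because the temporal potential down-weights inconsistent hypotheses.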
Zhen Zeng, Zheming Zhou, Zhiqiang Sui, Odest Chadwicke Jenkins
We present the Semantic Robot Programming (SRP) paradigm as a convergence of robot programming by demonstration and semantic mapping. In SRP, a user can declaratively program a robot manipulator by demonstrating a snapshot of their intended goal scene in the workspace. The efficacy of SRP is demonstrated for the task of tray-setting with a Michigan Progress Fetch robot.
SUM: Sequential Scene Understanding and Manipulation. IROS 2017.
Zhiqiang Sui, Zheming Zhou, Zhen Zeng, Odest Chadwicke Jenkins
Perception in cluttered scenes remains a critical challenge for robots performing autonomous sequential manipulation tasks. We combine deterministic deep learning with probabilistic generative inference for robust sequential scene estimation and manipulation in Sequential Scene Understanding and Manipulation (SUM).
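One way to picture combining the two signals is to score each scene hypothesis by the product of a detector's confidence and a generative render-and-compare likelihood. This toy sketch reflects our own assumptions (the scoring functions and 1-D "depth" observation are illustrative), not the SUM implementation:

```python
# Rank scene hypotheses by detector confidence times generative fit,
# instead of trusting either signal alone.

def render_score(hypothesis, observation):
    """Toy generative likelihood: how well the hypothesized scene
    explains the observed values (here, 1-D 'depth' readings)."""
    return 1.0 / (1.0 + sum((h - o) ** 2 for h, o in zip(hypothesis, observation)))

def best_hypothesis(hypotheses, detector_conf, observation):
    scored = [(detector_conf[i] * render_score(h, observation), i)
              for i, h in enumerate(hypotheses)]
    return max(scored)[1]

observation = [1.0, 2.0, 3.0]
hypotheses = [[1.0, 2.0, 3.1],   # explains the scene well
              [0.0, 0.0, 0.0]]   # poor fit despite high detector confidence
detector_conf = [0.4, 0.9]
winner = best_hypothesis(hypotheses, detector_conf, observation)
```

A confidently wrong detection loses to a hypothesis that actually explains the observation, which is the robustness the deterministic-plus-generative combination buys.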
Zhen Zeng, Benjamin Kuipers
We aim to enable robots to learn object manipulation by imitation. Given external observations of demonstrations of object manipulation, we believe two underlying problems must be addressed in learning by imitation:
1. segmenting a demonstration into skills that can be individually learned, and
2. formulating the correct RL problem that considers only the relevant aspects of each skill, so that the policy for each skill can be learned effectively.
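One simple way to operationalize the segmentation idea is to split a demonstration wherever the set of changing features changes. This is our own illustrative sketch of the concept, not the method from the work:

```python
# Split a demonstration into candidate skills at points where the set of
# moving features changes; each segment can then be learned individually.

def moving_features(frame_a, frame_b, eps=1e-6):
    """Indices of features that change between consecutive frames."""
    return frozenset(i for i, (a, b) in enumerate(zip(frame_a, frame_b))
                     if abs(a - b) > eps)

def segment(demo):
    """Return segment start indices where the moving-feature set changes."""
    boundaries = [0]
    prev = moving_features(demo[0], demo[1])
    for t in range(1, len(demo) - 1):
        cur = moving_features(demo[t], demo[t + 1])
        if cur != prev:
            boundaries.append(t)
            prev = cur
    return boundaries

# Feature 0 moves first (e.g., reach), then feature 1 moves (e.g., transport).
demo = [[0.0, 0.0], [0.5, 0.0], [1.0, 0.0], [1.0, 0.5], [1.0, 1.0]]
bounds = segment(demo)
```

Restricting each segment's RL problem to its own moving features is one way to realize the second point: the reach skill's policy need not reason about features that only matter during transport.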
MRI Bias Field Correction Based on Tissue Labeling
EECS 556 Best Project Award
Lianli Liu*, Jiyang Chu*, Jie Li*, Zhen Zeng*
The bias field in MR images arises mainly from imperfections in RF coil profiles and harms the performance of many image analysis algorithms. Traditional segmentation-based bias field correction methods are constrained by their low intensity-based segmentation accuracy.
We propose to integrate rich-feature segmentation into bias field estimation. A Laplacian regularization scheme is also designed to encourage smoothness of the estimated bias field.
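The estimation step can be sketched in 1-D: given tissue labels, the residual after removing class means is attributed to a bias field that is forced to be smooth by a Laplacian penalty. The additive log-intensity model, the λ value, and the toy data below are illustrative assumptions, not the project's code:

```python
import numpy as np

def estimate_bias(y, labels, class_means, lam=10.0):
    """Solve min_b ||y - mu[labels] - b||^2 + lam * ||L b||^2 in 1-D,
    where L is the discrete Laplacian (second-difference) operator."""
    n = len(y)
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i], L[i, i + 1], L[i, i + 2] = 1.0, -2.0, 1.0
    r = y - class_means[labels]        # residual after removing tissue means
    A = np.eye(n) + lam * L.T @ L      # normal equations of the
    return np.linalg.solve(A, r)       # regularized least squares

# Two tissue classes with log-intensity means 0 and 1, corrupted by a
# smooth linear ramp bias (in log space the bias is additive).
labels = np.array([0, 0, 1, 1, 0, 0])
means = np.array([0.0, 1.0])
true_bias = np.linspace(0.0, 0.5, 6)
y = means[labels] + true_bias
b_hat = estimate_bias(y, labels, means)
corrected = y - b_hat
```

A linear ramp has zero second differences, so the Laplacian penalty does not distort it here; the better the segmentation labels, the cleaner the residual that feeds this solve, which is the motivation for rich-feature segmentation.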
3D Articulated Structure from Motion
Given a video of human activities, the 3D skeleton structure of the person is reconstructed as a set of rigid bodies connected through joints.
Real-time Hand Gesture Recognition
Zhen Zeng*, Lu Hong*
The main contribution is the efficient prediction of fingertip locations.
Hand posture is first recognized by applying an SVM to features extracted from fingertip detection. Bayes filtering is then applied to update the probability of different gestures within the same posture category.
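The filtering step over gestures amounts to a discrete Bayes filter. The transition and observation numbers below are illustrative assumptions, not the project's models:

```python
# One predict-update step of a discrete Bayes filter over gestures that
# share a hand posture category: belief'[g] ∝ P(obs|g) * sum_h T[h][g] belief[h].

def bayes_update(belief, likelihoods, transition):
    n = len(belief)
    predicted = [sum(transition[h][g] * belief[h] for h in range(n))
                 for g in range(n)]                       # prediction step
    posterior = [likelihoods[g] * predicted[g] for g in range(n)]
    z = sum(posterior)                                    # normalizer
    return [p / z for p in posterior]

# Two gestures sharing one posture; gestures tend to persist (0.9).
transition = [[0.9, 0.1], [0.1, 0.9]]
belief = [0.5, 0.5]
for obs_likelihood in [[0.8, 0.2], [0.8, 0.2]]:  # fingertip features favor gesture 0
    belief = bayes_update(belief, obs_likelihood, transition)
```

Accumulating evidence over frames this way makes the recognition robust to single-frame fingertip detection noise, rather than committing to each frame's SVM output independently.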