Research

Generalized Object Permanence for Object Retrieval through Semantic Linking Maps

Zhen Zeng, Adrian Röfer, Shiyang Lu, Odest Chadwicke Jenkins

When operating at the building scale, mobile robots cannot directly observe many of the objects they may need to complete a given task. We describe the Generalized Object Permanence (GOP) problem as the prediction of a target object's semantic location (e.g., "on sofa", "in sink"). We represent the GOP problem with the Semantic Linking Map (SLiM), a factor graph model. SLiM maintains beliefs over all inter-object spatial relations by considering long-term occurrence history, short-term recent observations, and contextual relations from prior common-sense knowledge. We demonstrate the efficacy of SLiM-based object permanence in a multi-room object search task using the Michigan Progress Fetch robot.
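
As a rough illustration of the idea (a sketch under our own assumptions, not the SLiM implementation), the snippet below fuses the three kinds of evidence named above into a normalized belief over candidate semantic locations for a target object; every object name, location, and number in it is hypothetical.

# Minimal sketch: fuse long-term occurrence history, short-term recent
# observations, and common-sense priors into a belief over where a target
# object is. Hypothetical values throughout.
def fuse_location_belief(history, recent, prior):
    """Multiply per-location factors and renormalize into a belief."""
    scores = {loc: history[loc] * recent.get(loc, 1.0) * prior.get(loc, 1.0)
              for loc in history}
    total = sum(scores.values())
    return {loc: s / total for loc, s in scores.items()}

# Hypothetical factors for a coffee mug.
history = {"on kitchen counter": 0.6, "on office desk": 0.3, "in sink": 0.1}
recent = {"on office desk": 2.0}   # a recent detection boosts the desk
prior = {"in sink": 1.5}           # common sense: mugs end up in sinks

belief = fuse_location_belief(history, recent, prior)
print(max(belief, key=belief.get), belief)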

Paper: ICRA '19 Workshop (Best Workshop Paper Award), IROS '19 (under review)

Semantic Mapping with Simultaneous Object Detection and Localization

Zhen Zeng, Yunwen Zhou, Odest Chadwicke Jenkins, Karthik Desingh

We present a filtering-based method, Contextual Temporal Mapping (CT-Map), for semantic mapping that simultaneously detects objects and localizes their 6 degree-of-freedom poses. CT-Map models the semantic mapping problem as a Conditional Random Field (CRF) that accounts for contextual relations between objects and temporal consistency of object poses, along with a measurement potential. A particle filtering based algorithm is then proposed to perform inference in the CT-Map model. Our results demonstrate that CT-Map provides improved object detection and pose estimation with respect to baseline methods that treat observations as independent samples of a scene.
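
Under simplifying assumptions (1-D poses and placeholder potential functions of our own), the sketch below shows one particle-filter step in which each candidate pose is weighted by the product of a measurement potential, a contextual potential, and a temporal-consistency potential, mirroring the factors described above; it is illustrative, not the CT-Map code.

import random

def measurement_potential(pose, observation):
    # Placeholder: higher when the pose agrees with the current detection.
    return 1.0 / (1.0 + abs(pose - observation))

def context_potential(pose, related_pose):
    # Placeholder: e.g. a cup tends to sit about 0.5 units from its plate.
    return 1.0 / (1.0 + abs(pose - related_pose - 0.5))

def temporal_potential(pose, previous_pose):
    # Placeholder: objects rarely teleport between frames.
    return 1.0 / (1.0 + abs(pose - previous_pose))

def particle_filter_step(particles, observation, related_pose, previous_pose):
    weights = [measurement_potential(p, observation)
               * context_potential(p, related_pose)
               * temporal_potential(p, previous_pose)
               for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample in proportion to the combined potentials.
    return random.choices(particles, weights=weights, k=len(particles))

particles = [random.uniform(0.0, 2.0) for _ in range(200)]  # 1-D poses for brevity
particles = particle_filter_step(particles, observation=1.2,
                                 related_pose=0.8, previous_pose=1.1)
print(sum(particles) / len(particles))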

Paper: IROS '18, arXiv

Semantic Robot Programming for Goal-Directed Manipulation in Cluttered Scenes

Zhen Zeng, Zheming Zhou, Zhiqiang Sui, Odest Chadwicke Jenkins

We present the Semantic Robot Programming (SRP) paradigm as a convergence of robot programming by demonstration and semantic mapping. In SRP, a user declaratively programs a robot manipulator by demonstrating a snapshot of their intended goal scene in the workspace. The efficacy of SRP is demonstrated on a tray-setting task with a Michigan Progress Fetch robot.
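
A minimal sketch of the declarative flavor of SRP, assuming the demonstrated goal snapshot has already been summarized as object-to-object relations; the relation names and the tray-setting scene below are our own illustration, not taken from the paper.

# Hypothetical goal scene distilled from a user's demonstrated snapshot.
goal_scene = {
    ("mug", "on", "tray"),
    ("spoon", "right_of", "mug"),
    ("napkin", "on", "tray"),
}

def unsatisfied_relations(goal, current):
    """Relations from the demonstrated goal that the robot still has to achieve."""
    return goal - current

current_scene = {("mug", "on", "tray")}
for obj, rel, anchor in sorted(unsatisfied_relations(goal_scene, current_scene)):
    print(f"plan: place {obj} {rel} {anchor}")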

Paper: arXiv, ICRA '18, poster, RSS Workshop '17

Sequential Scene Understanding and Manipulation

Zhiqiang Sui, Zheming Zhou, Zhen Zeng, Odest Chadwicke Jenkins

Perception in cluttered scenes remains a critical challenge for robots performing autonomous sequential manipulation tasks. We propose a probabilistic approach for robust sequential scene estimation and manipulation: Sequential Scene Understanding and Manipulation (SUM).

Paper: arXiv, IROS '17

Object Manipulation Learning by Imitation

Zhen Zeng, Benjamin Kuipers

We aim to enable a robot to learn object manipulation by imitation. Given external observations of demonstrations of object manipulation, we believe two underlying problems must be addressed in learning by imitation (a toy sketch of the first appears after this list):

  • segmenting a demonstration into skills that can be individually learned

  • formulating the correct RL problem, one that considers only the relevant aspects of each skill, so that the policy for each skill can be effectively learned
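
As referenced above, here is a toy sketch of the segmentation sub-problem, under the assumption that a skill boundary can be placed wherever a discrete contact relation changes; the demonstration frames and the relation extractor are hypothetical placeholders, not the method from the paper.

def extract_relations(frame):
    """Stand-in for perception: which gripper/object contacts hold in a frame."""
    return frozenset(frame["contacts"])

def segment_demonstration(frames):
    # Start a new segment whenever the contact relations change.
    segments, start = [], 0
    for i in range(1, len(frames)):
        if extract_relations(frames[i]) != extract_relations(frames[i - 1]):
            segments.append((start, i))
            start = i
    segments.append((start, len(frames)))
    return segments

demo = [{"contacts": []}, {"contacts": []},
        {"contacts": ["gripper-lid"]}, {"contacts": ["gripper-lid"]},
        {"contacts": []}]
print(segment_demonstration(demo))  # [(0, 2), (2, 4), (4, 5)]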

Details: arXiv, video

MRI Bias Field Correction Based on Tissue Labeling

The bias field in MR images, which mainly arises from imperfections in the RF profile, harms the performance of many image analysis algorithms. Traditional segmentation-based bias field correction methods are constrained by the low accuracy of intensity-based segmentation.

We propose to integrate rich-feature segmentation into bias field estimation. A Laplacian regularization scheme is also designed to encourage smoothness of the estimated bias field.
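
A simplified 1-D sketch of the general recipe, assuming tissue labels are already given: the raw bias estimate at a voxel is its observed intensity divided by its tissue-class mean, and repeated discrete-Laplacian smoothing steps stand in for the Laplacian regularization. This is an illustration under our own assumptions, not the paper's exact formulation.

def estimate_bias(intensities, labels, smoothing_iters=50, step=0.25):
    # Mean intensity of each tissue class.
    class_mean = {}
    for lab in set(labels):
        vals = [x for x, l in zip(intensities, labels) if l == lab]
        class_mean[lab] = sum(vals) / len(vals)

    # Raw, noisy bias estimate: observed intensity / expected-for-tissue.
    bias = [x / class_mean[l] for x, l in zip(intensities, labels)]

    # Gradient-style steps on a discrete Laplacian penalty smooth the field.
    for _ in range(smoothing_iters):
        new = bias[:]
        for i in range(1, len(bias) - 1):
            laplacian = bias[i - 1] - 2 * bias[i] + bias[i + 1]
            new[i] = bias[i] + step * laplacian
        bias = new
    return bias

intensities = [105, 110, 96, 202, 210, 95, 100]   # toy scan line, two tissues
labels = ["gm", "gm", "gm", "wm", "wm", "gm", "gm"]
print([round(b, 2) for b in estimate_bias(intensities, labels)])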

Details: pdf, slides

3D Articulated Structure from Motion

Given a video of human activities, the 3D skeleton structure of the person is reconstructed as a set of rigid bodies connected to each other through the joints.

Details: pdf

Real-time Hand Gesture Recognition

The main contribution is efficient prediction of fingertip locations.

Hand posture is first recognized by applying an SVM to features extracted from fingertip detection. Bayes filtering is then applied to update the probabilities of different gestures within the same posture category.
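
The sketch below shows a generic recursive Bayes-filter update over gestures that share a posture category; the gesture labels and likelihood values are made up for illustration and are not the project's.

def bayes_update(belief, likelihoods):
    """One recursive update: posterior proportional to likelihood times prior."""
    posterior = {g: likelihoods.get(g, 1e-6) * p for g, p in belief.items()}
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

# Two gestures sharing the same "two fingers extended" posture category.
belief = {"swipe_left": 0.5, "swipe_right": 0.5}
for frame_likelihood in ({"swipe_left": 0.7, "swipe_right": 0.3},
                         {"swipe_left": 0.8, "swipe_right": 0.2}):
    belief = bayes_update(belief, frame_likelihood)
print(belief)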

 

Details: pdf, slides, video