Research

Overview & Theory

I develop robot learning methods for contact-rich dexterous manipulation under uncertainty. My work focuses on learned policies that use vision, proprioception, and tactile feedback to grasp, stow, and manipulate objects when object pose, friction, contact modes, and target geometry are only partially observed. I am especially interested in risk-aware policy execution: how robots can estimate uncertainty from multiple sources (proprioceptive, perceptual, and tactile) during learned manipulation, recognize when a policy is entering a likely failure mode, and decide when to recover, replan, or gather more information. My work connects visuo-tactile policy learning, belief estimation, distributional reinforcement learning, and risk-sensitive control to build more robust systems for physical autonomy. I validate these ideas in simulation (MuJoCo, Isaac Sim, and standard reinforcement-learning environments) and on real robot platforms equipped with anthropomorphic or tactile-sensorized dexterous hands.

Applications

I apply these ideas to robot learning systems for:

  • Visuo-tactile dexterous grasping under uncertain object pose, friction, and contact modes
  • Risk-aware execution for learned manipulation policies that anticipate and recover from likely failures
  • Learning from uncertainty-aware expert demonstrations for contact-rich manipulation tasks
  • Safe robot navigation and control using distributional reinforcement learning and risk-sensitive planning
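The risk-aware theme running through these directions can be illustrated with a minimal sketch of distributional action selection in the style of QR-DQN: instead of ranking actions by their mean return, rank them by the average of their lowest quantiles (a CVaR criterion), so the agent avoids actions with heavy downside tails. This is an illustrative numpy sketch, not code from any of the projects above; the function name, the `alpha` parameter, and the sample numbers are my own.

```python
import numpy as np

def cvar_action(quantiles, alpha=0.25):
    """Select the action whose alpha-tail (CVaR) return is highest.

    quantiles: array of shape (n_actions, n_quantiles), each row holding
        the learned quantile estimates of that action's return
        distribution, sorted in ascending order.
    alpha: tail fraction in (0, 1]; alpha=1.0 averages all quantiles
        and recovers risk-neutral (mean) action selection.
    """
    n_q = quantiles.shape[1]
    k = max(1, int(np.ceil(alpha * n_q)))   # quantiles in the alpha-tail
    cvar = quantiles[:, :k].mean(axis=1)    # mean of the worst-case tail
    return int(np.argmax(cvar)), cvar

# Toy example: action 0 has a higher mean but a catastrophic left tail.
q = np.array([[-10.0, 0.0, 10.0, 20.0],
              [  1.0, 2.0,  3.0,  4.0]])
a_risk, _ = cvar_action(q, alpha=0.25)   # → 1 (avoids the -10 tail)
a_mean, _ = cvar_action(q, alpha=1.0)    # → 0 (risk-neutral mean: 5.0 vs 2.5)
```

Sweeping `alpha` from 1.0 toward 0 interpolates between risk-neutral and increasingly pessimistic execution, which is the knob a risk-aware executor can tighten when its uncertainty estimates flag a likely failure mode.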

Demos

  • Robust Dexterous Grasping (GraspIt demo)
  • Learned Manipulation Policy (GraspIt demo)
  • Safe Visuo-Tactile Manipulation (safety-critical control demo)
  • Risk-Aware Robot Learning (Risk-Aware QR-DQN in action)

Publications

2026

arXiv
Clinton Enwerem, Shreya Kalyanaraman, John S. Baras, and Calin Belta, "Variational Neural Belief Parameterizations for Robust Dexterous Grasping under Multimodal Uncertainty," arXiv preprint, 2026. Extended preprint with additional experiments, analysis, and discussion.
arXiv
Clinton Enwerem, John S. Baras, and Calin Belta, "Risk-Constrained Belief-Space Optimization for Safe Control under Latent Uncertainty," arXiv preprint, 2026.
Peer-Reviewed
2025

CDC
Clinton Enwerem, Aniruddh G. Puranic, John S. Baras, and Calin Belta, "Safety-Aware Reinforcement Learning for Control via Risk-Sensitive Value Iteration and Quantile Regression," in Proceedings of the 64th IEEE Conference on Decision and Control (CDC), 2025.
2024

CDC
Clinton Enwerem, Erfaun Noorani, John S. Baras, and Brian M. Sadler, "Robust Stochastic Shortest-Path Planning via Risk-Sensitive Incremental Sampling," in Proceedings of the 63rd IEEE Conference on Decision and Control (CDC), 2024.
ECC
Clinton Enwerem and John S. Baras, "Safe Collective Control under Noisy Inputs and Competing Constraints via Non-Smooth Barrier Functions," in Proceedings of the 2024 European Control Conference (ECC), pp. 3762–3768, 2024.
LCSS
Clinton Enwerem and John S. Baras, "Formation Tracking for a Class of Uncertain Multiagent Systems: A Distributed Kalman Filtering Approach," IEEE Control Systems Letters, vol. 8, 2024.

2023

CoDIT
Clinton Enwerem and John S. Baras, "Consensus-Based Leader-Follower Formation Tracking for Control-Affine Nonlinear Multiagent Systems," in Proceedings of the 9th International Conference on Control, Decision and Information Technologies (CoDIT), Rome, Italy, Jul. 2023, pp. 1226–1231. doi: 10.1109/CoDIT58514.2023.10284199.
arXiv
Clinton Enwerem, John S. Baras, and Danilo Romero, "Distributed Optimal Formation Control for an Uncertain Multiagent System in the Plane," arXiv:2301.05841 [cs, eess], Jan. 2023.