Hi! I’m a Ph.D. student at The Robotics Institute, Carnegie Mellon University, where I’m fortunate to be advised by Prof. Oliver Kroemer.
My research focuses on building intelligent robotic systems that can learn complex skills and generalize them to new environments with zero to minimal supervision.
I develop algorithms for robot learning, with an emphasis on skill acquisition and transfer, action-effect prediction, affordance understanding, and learning from demonstration. I'm particularly interested in combining robot manipulation with deep learning, perception, foundation models (LLMs/VLMs), symbolic reasoning, and data-efficient optimization techniques to enable robots to adapt quickly and robustly to real-world scenarios.
Ultimately, my goal is to bridge the gap between low-level control and high-level reasoning, empowering robots to understand, plan, and act with the versatility and intuition of humans.
") does not match the recommended repository name for your site ("
").
", so that your site can be accessed directly at "http://
".
However, if the current repository name is intended, you can ignore this message by removing "{% include widgets/debug_repo_name.html %}
" in index.html
.
",
which does not match the baseurl
("
") configured in _config.yml
.
baseurl
in _config.yml
to "
".
This project introduces Grounded Task-Axes (GTA), a novel framework for enabling zero-shot robotic skill transfer by modularizing robot actions into interpretable and reusable low-level controllers. Each controller is grounded using object-centric keypoints and axes, allowing robots to align and execute skills across novel tools and scenes without any training.
This project bridges traditional control theory with modern visual reasoning, offering interpretable, adaptable, and sample-efficient (zero-shot) skill transfer for real-world manufacturing and manipulation tasks.
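To make the task-axis idea concrete, here is a minimal sketch in which a low-level position controller is parameterized by an axis grounded on two object keypoints; the keypoint names and the scraping example are illustrative assumptions, not the project's actual interface:

```python
# Minimal sketch of a grounded task axis (illustrative, not the GTA codebase):
# a controller is parameterized by an axis defined from object keypoints, so
# re-detecting keypoints on a novel tool is enough to reuse the same motion.
import numpy as np

def task_axis(keypoint_base: np.ndarray, keypoint_tip: np.ndarray) -> np.ndarray:
    """Unit axis pointing from a base keypoint to a tip keypoint."""
    v = keypoint_tip - keypoint_base
    return v / np.linalg.norm(v)

def position_targets(start: np.ndarray, axis: np.ndarray,
                     distance: float, steps: int) -> np.ndarray:
    """Waypoints moving `distance` meters along the grounded axis."""
    return np.array([start + axis * distance * t
                     for t in np.linspace(0.0, 1.0, steps)])

# Example: a "scrape along the blade" motion grounded on a new tool's keypoints.
base, tip = np.array([0.4, 0.0, 0.1]), np.array([0.4, 0.15, 0.1])
waypoints = position_targets(base, task_axis(base, tip), distance=0.1, steps=5)
```

Because the controller depends only on the grounded axis, aligning keypoints on a new tool is all that is needed to transfer the skill, which is what makes the approach zero-shot.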
M. Yunus Seker, Shobhit Aggarwal, Oliver Kroemer
Humanoids 2025 - Under Review
Grounded Task Axes (GTA) introduces a zero-shot skill transfer framework that enables robots to generalize manipulation tasks to unseen objects by grounding modular controllers (such as position, force, and orientation controllers) using vision foundation models. By matching semantic keypoints between objects, it allows robots to perform complex, multi-step tasks, such as scraping, pouring, or inserting, without training or fine-tuning.
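As a rough illustration of the keypoint-matching step, the sketch below does a nearest-neighbor match over per-keypoint visual descriptors using cosine similarity; the random feature vectors are stand-ins, and the paper's actual foundation-model features and matching procedure may differ:

```python
# Match each demo-object keypoint to the most similar novel-object keypoint
# by cosine similarity over visual descriptors (stand-in random features here).
import numpy as np

def match_keypoints(demo_feats: np.ndarray, novel_feats: np.ndarray) -> np.ndarray:
    """For each demo keypoint feature, return the index of the closest novel feature."""
    demo = demo_feats / np.linalg.norm(demo_feats, axis=1, keepdims=True)
    novel = novel_feats / np.linalg.norm(novel_feats, axis=1, keepdims=True)
    sim = demo @ novel.T            # cosine similarity matrix (n_demo x n_novel)
    return sim.argmax(axis=1)       # best match per demo keypoint

rng = np.random.default_rng(0)
demo_feats = rng.normal(size=(4, 128))     # 4 keypoints on the demo tool
novel_feats = rng.normal(size=(50, 128))   # candidate points on the new tool
matches = match_keypoints(demo_feats, novel_feats)
```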
M. Yunus Seker, Oliver Kroemer
[Paper] [ArXiv] [Video] [Presentation]
IROS 2024 - Accepted
This paper presents a framework that optimizes robotic actions by choosing among multiple predictive models (analytical, learned, and simulation-based) depending on context. Using Model Deviation Estimators (MDEs), the robot selects the most reliable model to quickly and accurately predict outcomes. The introduction of sim-to-sim MDEs enables faster optimization and smooth transfer to real-world tasks through fine-tuning.
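Here is a deliberately simplified sketch of the model-selection idea: each model carries an estimator scoring its expected deviation in the current context, and the cheapest model within tolerance is used. The placeholder callables and costs are assumptions, not the learned MDEs from the paper:

```python
# Pick the cheapest predictive model whose estimated deviation is acceptable.
from typing import Callable, List, Tuple
import numpy as np

Model = Callable[[np.ndarray], np.ndarray]   # predicts the next state
MDE = Callable[[np.ndarray], float]          # predicted model deviation

def select_model(context: np.ndarray,
                 models: List[Tuple[str, Model, MDE, float]],
                 tol: float) -> Tuple[str, Model]:
    """Return the cheapest model whose estimated deviation is within `tol`."""
    for name, model, mde, cost in sorted(models, key=lambda m: m[3]):
        if mde(context) <= tol:
            return name, model
    # Fall back to the most expensive (assumed most accurate) model.
    name, model, _, _ = max(models, key=lambda m: m[3])
    return name, model

# Toy usage with placeholder models and deviation estimates.
models = [
    ("analytical", lambda s: s + 0.1, lambda s: 0.05, 1.0),
    ("simulation", lambda s: s + 0.09, lambda s: 0.01, 10.0),
]
name, model = select_model(np.zeros(3), models, tol=0.02)   # -> "simulation"
```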
M. Yunus Seker, Oliver Kroemer
[Paper] [ArXiv] [Video] [Presentation]
ICRA 2024 - Accepted
This paper introduces a Bayesian optimization framework to estimate object material properties from observed interactions. By modeling each observation independently and focusing only on relevant object parameters, the method achieves faster, more generalizable optimization. It further improves efficiency through partial reward evaluations, enabling robust and incremental learning across diverse real-world scenes.
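To give a flavor of the optimization loop, the sketch below proposes a material parameter, scores how well a toy simulator reproduces an observed outcome, and fits a GP surrogate to pick the next candidate. The `simulate` function and the single friction parameter are illustrative stand-ins, not the paper's setup:

```python
# Toy Bayesian optimization over one material parameter (friction).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def simulate(friction: float) -> float:
    return 0.5 * 9.81 * friction            # stand-in for a physics rollout

observed = simulate(0.3)                     # outcome of the "real" interaction

def loss(friction: float) -> float:
    return (simulate(friction) - observed) ** 2

X, y = [], []
candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
for i in range(10):
    if len(X) < 3:                           # seed with random evaluations
        x_next = float(np.random.default_rng(i).uniform(0, 1))
    else:
        gp = GaussianProcessRegressor().fit(np.array(X), np.array(y))
        mu, sigma = gp.predict(candidates, return_std=True)
        x_next = float(candidates[np.argmin(mu - sigma)][0])  # optimistic pick
    X.append([x_next])
    y.append(loss(x_next))

best_friction = X[int(np.argmin(y))][0]
```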
M. Yunus Seker, Mert Imre, Justus Piater, Emre Ugur
RSS 2019 - Accepted
Conditional Neural Movement Primitives (CNMPs) are a learning-from-demonstration framework that enables robots to generate and adapt complex movement trajectories based on external goals and sensor feedback. Built on Conditional Neural Processes (CNPs), CNMPs learn temporal sensorimotor patterns from demonstrations and produce joint or task-space motions conditioned on goals and real-time sensory input. Experiments show CNMPs can generalize from few or many demonstrations, adapt to factors like object weight or shape, and react to unexpected changes during execution.
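As a structure-only illustration of the CNP-style conditioning in CNMPs, the sketch below encodes observed (time, value) pairs, mean-aggregates them into a latent representation, and decodes a prediction for a query time. The tiny random-weight linear maps are placeholders, not a trained CNMP:

```python
# Structure-only sketch of CNP-style conditioning: encode context pairs,
# aggregate by mean, decode a query time against the aggregated latent.
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(2, 16))    # placeholder encoder weights
W_dec = rng.normal(size=(17, 1))    # placeholder decoder weights

def encode(obs: np.ndarray) -> np.ndarray:
    """obs: (n, 2) array of (t, y) pairs -> mean-aggregated latent of size 16."""
    return np.tanh(obs @ W_enc).mean(axis=0)

def decode(latent: np.ndarray, t_query: float) -> float:
    """Predict y at t_query conditioned on the aggregated context latent."""
    inp = np.concatenate([latent, [t_query]])
    return float((inp @ W_dec)[0])

context = np.array([[0.0, 0.1], [0.5, 0.4]])    # conditioning observations
y_hat = decode(encode(context), t_query=1.0)    # goal-conditioned query
```

The mean aggregation is what lets the model condition on few or many demonstrations alike, since the latent has a fixed size regardless of how many observations are provided.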