Preprint
Article

This version is not peer-reviewed.

Multimodal Control of Manipulators: Coupling Kinematics and Vision for Self-Driving Laboratory Operations

Submitted: 02 December 2025

Posted: 04 December 2025


Abstract
Self-driving laboratories are redefining autonomous experimentation by integrating robotic manipulation, computer vision, and intelligent planning to accelerate scientific discovery. This work presents a vision-guided motion planning framework for robotic manipulators operating in dynamic laboratory environments, with a focus on evaluating motion smoothness and control stability. The framework enables autonomous detection, tracking, and interaction with textured objects through a hybrid scheme that couples advanced motion planning algorithms with real-time visual feedback. Kinematic modeling of the manipulator is carried out using screw theory formulations, which provide a rigorous foundation for deriving the forward kinematics and the space Jacobian. These formulations are further employed to compute inverse kinematic solutions via the Damped Least Squares (DLS) method, ensuring stable and continuous joint trajectories even in the presence of redundancy and singularities. Motion trajectories toward target objects are generated using the RRT* algorithm, offering asymptotically optimal path planning under dynamic constraints. Object pose estimation is achieved through a vision pipeline that integrates feature-based detection with homography-driven depth analysis, enabling adaptive tracking and dynamic grasping of textured objects. The manipulator’s performance is quantitatively evaluated using smoothness metrics, RMSE pose errors, and joint motion profiles including velocity continuity, acceleration, jerk, and snap. Simulation studies demonstrate the robustness and adaptability of the proposed framework in autonomous experimentation workflows, highlighting its potential to enhance precision, scalability, and efficiency in next-generation self-driving laboratories.
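For readers unfamiliar with the Damped Least Squares scheme mentioned above, a minimal NumPy sketch of one DLS update is given below. This is an illustrative reconstruction, not the paper's implementation: the 6-by-n space Jacobian J, the 6-D pose (twist) error err, and the damping factor lam are assumed inputs supplied by the screw-theory forward kinematics described in the abstract.

    import numpy as np

    def dls_ik_step(J, err, lam=0.05):
        # One damped least-squares update:
        #   dq = J^T (J J^T + lam^2 I)^(-1) err
        # J   : 6 x n space Jacobian at the current joint configuration
        # err : 6-vector pose error (twist toward the target pose)
        # lam : damping factor; larger values trade tracking accuracy
        #       for numerical stability near singularities
        damped = J @ J.T + (lam ** 2) * np.eye(J.shape[0])
        return J.T @ np.linalg.solve(damped, err)

Iterating dq = dls_ik_step(J(q), err(q)) and accumulating q += dq until the error norm falls below a tolerance is one standard way to obtain the stable, continuous joint trajectories the abstract refers to.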
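The smoothness metrics used in the evaluation (velocity, acceleration, jerk, and snap) are successive time derivatives of the joint trajectories. A simple finite-difference sketch, assuming a uniformly sampled joint trajectory q of shape (T, n) with timestep dt (both hypothetical names, not taken from the paper), is:

    import numpy as np

    def smoothness_profiles(q, dt):
        # Finite-difference estimates of the first four time derivatives
        # of a joint trajectory q sampled every dt seconds.
        vel = np.gradient(q, dt, axis=0)      # joint velocity
        acc = np.gradient(vel, dt, axis=0)    # joint acceleration
        jerk = np.gradient(acc, dt, axis=0)   # jerk (3rd derivative)
        snap = np.gradient(jerk, dt, axis=0)  # snap (4th derivative)
        return vel, acc, jerk, snap

Lower peak jerk and snap values are conventionally read as smoother commanded motion.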
Keywords: 
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.