Self-driving laboratories are redefining autonomous experimentation by integrating robotic manipulation, computer vision, and intelligent planning to accelerate scientific discovery. This work presents a vision-guided motion planning framework for robotic manipulators operating in dynamic laboratory environments, with a focus on evaluating motion smoothness and control stability. The framework enables autonomous detection, tracking, and interaction with textured objects through a hybrid scheme that couples advanced motion planning algorithms with real-time visual feedback. Kinematic modeling of the manipulator is carried out using screw theory formulations, which provide a rigorous foundation for deriving the forward kinematics and the space Jacobian. These formulations are further employed to compute inverse kinematic solutions via the Damped Least Squares (DLS) method, ensuring stable and continuous joint trajectories even in the presence of redundancy and singularities. Motion trajectories toward target objects are generated using the RRT* algorithm, providing asymptotically optimal path planning under dynamic constraints. Object pose estimation is achieved through a vision pipeline that integrates feature-based detection with homography-driven depth analysis, enabling adaptive tracking and dynamic grasping of textured objects. The manipulator’s performance is quantitatively evaluated using smoothness metrics, RMSE pose errors, and joint motion profiles covering velocity continuity, acceleration, jerk, and snap. Simulation studies demonstrate the robustness and adaptability of the proposed framework in autonomous experimentation workflows, highlighting its potential to enhance precision, scalability, and efficiency in next-generation self-driving laboratories.
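
For reference, the damped least squares inverse kinematics mentioned above is conventionally written as the following update; this is a minimal sketch of the standard formulation, and the symbols J (space Jacobian), lambda (damping factor), e (end-effector pose error twist), and Delta-theta (joint increment) are the usual textbook notation rather than quantities defined in this paper.

\[
\Delta\theta = J^{\mathsf{T}} \left( J J^{\mathsf{T}} + \lambda^{2} I \right)^{-1} e
\]

In this form, the damping term \(\lambda^{2} I\) keeps the matrix inversion well conditioned as the manipulator approaches singular configurations, which is the mechanism behind the stable, continuous joint trajectories claimed for the framework.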