
Deep Reinforcement Learning Based Robotic Arm’s Target Reaching Performance Enhancement

Submitted: 26 January 2025
Posted: 26 January 2025

Abstract

This work presents the implementation of the Deep Deterministic Policy Gradient (DDPG) algorithm to enhance the target-reaching capability of the seven degree-of-freedom (7-DoF) Franka Panda robot arm. A simulated environment is established by employing OpenAI Gym, PyBullet, and Panda Gym. Upon completion of 100,000 training time steps, the DDPG algorithm attains a success rate of 100% and an average reward of -1.8. The actor loss and critic loss values are 0.0846 and 0.00486, respectively, indicating improved decision-making and accurate value function estimation. The simulation results demonstrate the efficiency of DDPG in improving robotic arm performance, highlighting its potential for application in robot arm manipulation.


1. Introduction

The enhancement of precision in robot arm manipulation remains a core research area for achieving full autonomy of robots in various sectors such as industrial manufacturing and assembly processes. Given the maturity of image recognition and vision systems [1,2], the next logical progression in this field is to achieve complete autonomy of robotic manipulators through the use of machine learning (ML), artificial neural networks (ANNs), and artificial intelligence (AI) as a whole. Numerous attempts have been made to create intelligent robots that can take on tasks and execute them accordingly [3,4,5,6,7,8,9,10].
Although developing a system with intelligence close to that of humans is still a long way off, robots that can perform specialized autonomous activities, such as intelligent facial emotion recognition [11], flying in natural and man-made environments [12], driving a vehicle [13], swimming [14], carrying boxes and material over different terrains [15], and picking up and placing objects [16,17], have already been realized.
However, some challenges must be overcome to achieve this goal. For instance, the mapping complexity from Cartesian space to the joint space of a robot arm increases with the number of joints and linkages that the manipulator has. This is problematic because the tasks assigned to a robotic arm are in Cartesian space, whereas the commands (velocity or torque) are in joint space [18,19]. Therefore, if full autonomy of robotic manipulators is the objective, the target-reaching problem is probably one of the most crucial factors that must be addressed.
The field of reinforcement learning, as described in [20,21], is a type of machine learning that aims to maximize the outcome of a given system using a dynamic and autonomous trial-and-error approach. It shares a similar objective with human intelligence, which is characterized by the ability to perceive and retain information as knowledge to be used for environment-adaptive behaviors. Central to the reinforcement learning framework are trial-and-error search and delayed rewards, which allow the learning strategy to interact with the environment by performing actions and discovering rewards [22]. Through this approach, software agents and machines can automatically select the most effective course of action to take in a given circumstance, thus improving performance. Reinforcement learning offers a framework and set of tools for designing sophisticated and challenging-to-engineer behaviors in robotics [23,24]. In contrast, the challenges presented by robotic issues serve as motivation, impact, and confirmation of advances in reinforcement learning. Multiple previous works on the implementation of reinforcement learning in the field of robotics depict this fact [25,26,27,28,29].

2. Modeling of Robotic Arm

2.1. Direct Kinematic Model of Robot Arm

The rotation matrices in the DH coordinate frame represent the rotations about the X and Z axes. The rotation matrices for these axes are, respectively, given as:
$$R_x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & C_{\delta_i} & -S_{\delta_i} \\ 0 & S_{\delta_i} & C_{\delta_i} \end{bmatrix}$$
and
$$R_z = \begin{bmatrix} C_{\phi_i} & -S_{\phi_i} & 0 \\ S_{\phi_i} & C_{\phi_i} & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
The homogeneous transformation matrix $T_i^{i-1}$ that accounts for both rotation and translation is given as:
$$T_i^{i-1} = \begin{bmatrix} R_i^{i-1} & p_i^{i-1} \\ 0 & 1 \end{bmatrix} = \mathrm{Rot}_z(\phi_i)\cdot \mathrm{trans}_z(d_i)\cdot \mathrm{trans}_x(a_i)\cdot \mathrm{Rot}_x(\delta_i)$$
$$T_i^{i-1} = \begin{bmatrix} C_{\phi_i} & -S_{\phi_i}C_{\delta_i} & S_{\phi_i}S_{\delta_i} & a_i C_{\phi_i} \\ S_{\phi_i} & C_{\phi_i}C_{\delta_i} & -C_{\phi_i}S_{\delta_i} & a_i S_{\phi_i} \\ 0 & S_{\delta_i} & C_{\delta_i} & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
where:
  • The rotation matrix $R_i^{i-1}$ represents the orientation of the $i$-th frame relative to the $(i-1)$-th frame.
  • The vector $p_i^{i-1}$ represents the position of the origin of the $i$-th link frame relative to the $(i-1)$-th frame, with components $(p_x, p_y, p_z)$.
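To make the mapping from DH parameters to a homogeneous transform concrete, the following is a minimal NumPy sketch of the standard DH transform written above. The function name dh_transform and the choice of radians for the angles are illustrative assumptions rather than code from this work.

```python
import numpy as np

def dh_transform(phi, d, a, delta):
    """Standard DH homogeneous transform T_i^{i-1}.

    phi:   joint angle about the previous Z-axis (rad)
    d:     offset along the previous Z-axis (m)
    a:     link length along the current X-axis (m)
    delta: twist angle about the current X-axis (rad)
    """
    c_phi, s_phi = np.cos(phi), np.sin(phi)
    c_del, s_del = np.cos(delta), np.sin(delta)
    return np.array([
        [c_phi, -s_phi * c_del,  s_phi * s_del, a * c_phi],
        [s_phi,  c_phi * c_del, -c_phi * s_del, a * s_phi],
        [0.0,    s_del,          c_del,         d],
        [0.0,    0.0,            0.0,           1.0],
    ])

# Example: joint 2 of the Panda (a = 0, d = 0, delta = -90 deg) at phi_2 = 0.3 rad
T_1_2 = dh_transform(0.3, 0.0, 0.0, np.deg2rad(-90.0))
```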

2.1.1. DH Axis Representation

The four DH parameters describe the translation and rotation relationship between two consecutive coordinate frames as follows:
  • $d$: the distance from the previous frame to the current frame along the previous $Z$-axis,
  • $\phi$: the angle between the $X$-axis of the previous frame and the $X$-axis of the current frame, measured about the previous $Z$-axis,
  • $a$: the distance between the $Z$-axes of the previous and current frames, measured along the current $X$-axis,
  • $\delta$: the twist angle between the $Z$-axis of the previous frame and the $Z$-axis of the current frame, measured about the current $X$-axis.
The DH parameters for the Franka Panda robot, shown in Figure 1, are given in Table 1. From these DH parameters and the general homogeneous transformation matrix above, the individual link transformations of the Franka Panda robot are derived as follows.
$$T_1^0 = \begin{bmatrix} C_{\phi_1} & -S_{\phi_1} & 0 & 0 \\ S_{\phi_1} & C_{\phi_1} & 0 & 0 \\ 0 & 0 & 1 & 0.333 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
$$T_2^1 = \begin{bmatrix} C_{\phi_2} & 0 & -S_{\phi_2} & 0 \\ S_{\phi_2} & 0 & C_{\phi_2} & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
$$T_3^2 = \begin{bmatrix} C_{\phi_3} & 0 & S_{\phi_3} & 0 \\ S_{\phi_3} & 0 & -C_{\phi_3} & 0 \\ 0 & 1 & 0 & 0.316 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
$$T_4^3 = \begin{bmatrix} C_{\phi_4} & 0 & S_{\phi_4} & 0.0825\,C_{\phi_4} \\ S_{\phi_4} & 0 & -C_{\phi_4} & 0.0825\,S_{\phi_4} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
$$T_5^4 = \begin{bmatrix} C_{\phi_5} & 0 & -S_{\phi_5} & -0.0825\,C_{\phi_5} \\ S_{\phi_5} & 0 & C_{\phi_5} & -0.0825\,S_{\phi_5} \\ 0 & -1 & 0 & 0.384 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
$$T_6^5 = \begin{bmatrix} C_{\phi_6} & 0 & S_{\phi_6} & 0 \\ S_{\phi_6} & 0 & -C_{\phi_6} & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
$$T_7^6 = \begin{bmatrix} C_{\phi_7} & 0 & S_{\phi_7} & 0.088\,C_{\phi_7} \\ S_{\phi_7} & 0 & -C_{\phi_7} & 0.088\,S_{\phi_7} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
$$T_7^0 = T_1^0\, T_2^1\, T_3^2\, T_4^3\, T_5^4\, T_6^5\, T_7^6 = \begin{bmatrix} r_{11} & r_{12} & r_{13} & p_x \\ r_{21} & r_{22} & r_{23} & p_y \\ r_{31} & r_{32} & r_{33} & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
The orientation and position of the end-effector, respectively, are given by:
$$r_{11} = S_{\phi_1}S_{\phi_3}C_{\phi_2} + C_{\phi_1}C_{\phi_3}C_{\phi_4}$$
$$r_{12} = S_{\phi_1}C_{\phi_2}C_{\phi_3}C_{\phi_4} - S_{\phi_3}C_{\phi_1}$$
$$r_{13} = S_{\phi_2}C_{\phi_1}$$
$$r_{21} = S_{\phi_1}C_{\phi_3}C_{\phi_4} + S_{\phi_3}C_{\phi_1}C_{\phi_2}$$
$$r_{22} = S_{\phi_1}S_{\phi_3}C_{\phi_2} + C_{\phi_1}C_{\phi_3}C_{\phi_4}$$
$$r_{23} = S_{\phi_2}S_{\phi_1}$$
$$r_{31} = S_{\phi_2}C_{\phi_3}C_{\phi_4}$$
$$r_{32} = S_{\phi_2}S_{\phi_3}C_{\phi_4}$$
$$r_{33} = C_{\phi_2}$$
and
$$p_x = 0.107\,S_{\phi_2}C_{\phi_1} + 0.088\,C_{\phi_1} + 0.384\left(S_{\phi_1}S_{\phi_3}C_{\phi_2} + C_{\phi_1}C_{\phi_3}C_{\phi_4}\right) + 0.316\,C_{\phi_1}C_{\phi_2} + 0.333\,C_{\phi_1}$$
$$p_y = 0.107\,S_{\phi_2}S_{\phi_1} + 0.088\,S_{\phi_1} + 0.384\left(S_{\phi_1}C_{\phi_3}C_{\phi_4} + S_{\phi_3}C_{\phi_1}C_{\phi_2}\right) + 0.316\,S_{\phi_1}C_{\phi_2} + 0.333\,S_{\phi_1}$$
$$p_z = 0.107\,C_{\phi_2} + 0.384\,S_{\phi_2}C_{\phi_3}C_{\phi_4} + 0.316\,S_{\phi_2} + 0.333\,S_{\phi_2}$$
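For a numerical check of how the per-joint transforms compose into the end-effector pose, the sketch below chains the Table 1 parameters (plus the flange offset) with the dh_transform helper sketched earlier. It assumes the standard DH convention used for the matrices above; other sources describe the Panda with the modified (Craig) convention, so this is only meant to illustrate the composition, not to reproduce the authors' code.

```python
import numpy as np

# DH table of the Franka Panda from Table 1: (a [m], d [m], delta [deg]) per joint, plus the flange.
PANDA_DH = [
    (0.0,     0.333,   0.0),
    (0.0,     0.0,   -90.0),
    (0.0,     0.316,  90.0),
    (0.0825,  0.0,    90.0),
    (-0.0825, 0.384, -90.0),
    (0.0,     0.0,    90.0),
    (0.088,   0.0,    90.0),
]
FLANGE = (0.0, 0.107, 0.0)

def forward_kinematics(q):
    """End-effector transform T_7^0 (flange included) for 7 joint angles q (rad),
    built by multiplying the per-joint DH transforms in order."""
    T = np.eye(4)
    for (a, d, delta_deg), phi in zip(PANDA_DH, q):
        T = T @ dh_transform(phi, d, a, np.deg2rad(delta_deg))
    a, d, delta_deg = FLANGE
    T = T @ dh_transform(0.0, d, a, np.deg2rad(delta_deg))
    return T  # rotation in T[:3, :3], position (p_x, p_y, p_z) in T[:3, 3]

T_ee = forward_kinematics(np.zeros(7))
```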

2.2. Incremental Inverse Kinematics of Robot Arm

The pose vector $y \in \mathbb{R}^6$ of the end-effector (EE) is given by the direct kinematics:
$$y = f(q)$$
In the case of the Panda robot, there are $n = 7$ active joint angles $q = \left[\phi_1, \ldots, \phi_7\right]^T$. With these joint angles $q$ and the direct kinematics $f: \mathbb{R}^n \rightarrow \mathbb{R}^6$, the goal is to solve the inverse kinematics $q = f^{-1}(y)$ using incremental inverse kinematics. In incremental inverse kinematics, the direct kinematics is linearized around the current joint angle configuration $q^*$ as
$$\delta y \big|_{y^*} \propto \delta q \big|_{q^*}.$$
The goal is to find the change in joint angles $\delta q$ that corresponds to a desired change in the end-effector pose $\delta y$. In the linearized form of the direct kinematics above,
  • $\delta y|_{y^*}$ represents the change in end-effector pose around a reference point $y^*$, and
  • $\delta q|_{q^*}$ represents the change in joint angles around a reference point $q^*$.
The proportionality symbol (∝) indicates that the change in end-effector position is directly related to the change in joint angles. To solve for the change in joint angles, the Jacobian matrix, denoted as J f , is utilized. The Jacobian matrix is a matrix of partial derivatives that describes how the end-effector position f depends on the joint angles q . Specifically, the Jacobian matrix is defined as
$$J_f := \left[\frac{\partial f_i}{\partial q_j}\right]_{i,j},$$
  • where $\frac{\partial f_i}{\partial q_j}$ represents the partial derivative of the $i$-th component of the end-effector pose with respect to the $j$-th joint angle.
By multiplying the Jacobian matrix $J_f$ by the change in joint angles $\delta q|_{q^*}$, an approximation of the change in the end-effector pose $\delta y|_{y^*}$ is obtained. The joint angles are then iteratively updated to minimize the difference between the current end-effector pose and the desired end-effector pose. This can be solved efficiently around $(y^*, q^*)$ using the Jacobian:
$$\delta y \big|_{y^*} = J_f(q^*)\, \delta q \big|_{q^*}$$

2.2.1. Steps of Incremental Inverse Kinematics

Given: target pose $y^{(t)}$
Required: joint angles $q^{(t)}$
  • Define the starting pose $(y^{(0)}, q^{(0)})$ and set up the incremental inverse kinematics
    $$\delta y\big|_{y^{(0)}} = J(q^{(0)})\, \delta q\big|_{q^{(0)}}$$
  • Determine the deviation $\delta y^{(r)}$ relative to the target pose, e.g. $\delta y^{(r)} = y^{(t)} - y^{(r)}$.
  • Check for termination, e.g. $\max_i \big|\delta y_i^{(r)}\big| \le \epsilon$.
  • Solve
    $$\delta y^{(r)} = J(q^{(r)})\, \delta q^{(r)}$$
  • Calculate the new joint angles
    $$q^{(r+1)} = q^{(r)} + \delta q^{(r)}$$
  • Set $r \leftarrow r + 1$ and repeat from the second step; a minimal numerical sketch of this loop is given below.
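The following is a minimal sketch of the iteration above in NumPy. It assumes a pose (here, position-only) map f such as the forward_kinematics helper sketched earlier, a finite-difference Jacobian, and an unregularized pseudo-inverse step; the step size, tolerance, and helper names are illustrative choices, not part of the original work.

```python
import numpy as np

def numerical_jacobian(f, q, eps=1e-6):
    """Finite-difference Jacobian of the pose map f at joint configuration q."""
    y0 = f(q)
    J = np.zeros((y0.size, q.size))
    for j in range(q.size):
        dq = np.zeros_like(q)
        dq[j] = eps
        J[:, j] = (f(q + dq) - y0) / eps
    return J

def incremental_ik(f, q0, y_target, tol=1e-4, max_iters=200, step=0.5):
    """Iteratively update q so that f(q) approaches y_target."""
    q = np.array(q0, dtype=float)
    for _ in range(max_iters):
        dy = y_target - f(q)              # deviation relative to the target pose
        if np.max(np.abs(dy)) <= tol:     # termination check
            break
        J = numerical_jacobian(f, q)
        dq = np.linalg.pinv(J) @ dy       # solve dy = J dq in the least-squares sense
        q = q + step * dq                 # new joint angles
    return q

# Example usage with the position part of the forward kinematics:
# ee_position = lambda q: forward_kinematics(q)[:3, 3]
# q_solution = incremental_ik(ee_position, np.zeros(7), np.array([0.4, 0.0, 0.4]))
```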

3. Deep Reinforcement Learning Algorithm Design

3.1. Policy Gradient Algorithm

The expected return under the policy parameters $\varrho$ can be written as the sum, over all states $s$ and actions $a$, of the action-value function $Q^\pi(s,a)$ multiplied by the policy function $\pi_\varrho(a|s)$ and weighted by the stationary distribution of states $d^\pi(s)$:
$$O(\varrho) = \sum_{s} d^\pi(s) \sum_{a} Q^\pi(s,a)\, \pi_\varrho(a|s)$$
where:
  • Objective function $O(\varrho)$: represents the expected cumulative reward obtained by following the policy $\pi_\varrho$ in the given environment. The objective function is optimized by adjusting the parameter $\varrho$ to maximize the expected cumulative reward.
  • Stationary state distribution $d^\pi(s)$: represents the probability of being in a particular state $s$ under the policy $\pi_\varrho$. Mathematically,
    $$d^\pi(s) = \lim_{t \to \infty} P(s_t = s \mid s_0, \pi_\varrho),$$
    i.e. the probability that $s_t = s$ when starting from $s_0$ and following policy $\pi_\varrho$ for $t$ time steps.
  • Action-value function $Q^\pi(s,a)$: represents the expected cumulative reward obtained by taking action $a$ in state $s$ and following the policy $\pi$ thereafter.
  • Policy function $\pi_\varrho(a|s)$: represents the probability of taking action $a$ in state $s$ under the policy parameterized by $\varrho$.
The policy gradient theorem provides a formula for the gradient of the expected return with respect to the policy parameters that does not require differentiating the stationary distribution of the Markov chain:
$$\nabla_\varrho O(\varrho) \propto \sum_{s \in S} d^\pi(s) \sum_{a \in A} Q^\pi(s,a)\, \nabla_\varrho \pi_\varrho(a|s)$$
where $Q^\pi(s,a)$ is the state-action value function for policy $\pi_\varrho$.

3.1.1. Derivation of Policy Gradient Theorem

$$\nabla_\varrho V^\pi(s) = \nabla_\varrho \left[ \sum_{a \in A} \pi_\varrho(a|s)\, Q^\pi(s,a) \right]$$
Using the derivative product rule:
$$= \sum_{a \in A} \left[ \nabla_\varrho \pi_\varrho(a|s)\, Q^\pi(s,a) + \pi_\varrho(a|s)\, \nabla_\varrho Q^\pi(s,a) \right]$$
The steps of this expansion are as follows.
Step 1: Write the state value as a sum over actions:
$$\nabla_\varrho V^\pi(s) = \nabla_\varrho \sum_{a \in A} \pi_\varrho(a|s)\, Q^\pi(s,a)$$
Step 2: Move the derivative operator inside the summation:
$$\nabla_\varrho V^\pi(s) = \sum_{a \in A} \nabla_\varrho \left[ \pi_\varrho(a|s)\, Q^\pi(s,a) \right]$$
Step 3: Apply the product rule to differentiate the product of $\pi_\varrho(a|s)$ and $Q^\pi(s,a)$ with respect to $\varrho$:
$$\nabla_\varrho V^\pi(s) = \sum_{a \in A} \left[ \nabla_\varrho \pi_\varrho(a|s)\, Q^\pi(s,a) + \pi_\varrho(a|s)\, \nabla_\varrho Q^\pi(s,a) \right]$$
Next, extend $Q^\pi(s,a)$ by incorporating the future state value. This can be done by considering the state-action pair $(s,a)$ and summing over all possible next states $s'$ and corresponding rewards $r$:
$$= \sum_{a \in A} \left[ \nabla_\varrho \pi_\varrho(a|s)\, Q^\pi(s,a) + \pi_\varrho(a|s)\, \nabla_\varrho \sum_{s', r} P(s', r \mid s, a)\, \big( r + V^\pi(s') \big) \right]$$
Since $P(s', r \mid s, a)$ and $r$ are not functions of $\varrho$, the derivative operator $\nabla_\varrho$ can be moved inside the summation over $s', r$ without affecting these terms:
$$\nabla_\varrho V^\pi(s) = \sum_{a \in A} \left[ \nabla_\varrho \pi_\varrho(a|s)\, Q^\pi(s,a) + \pi_\varrho(a|s) \sum_{s', r} P(s', r \mid s, a)\, \nabla_\varrho V^\pi(s') \right]$$
Next, observe that $\nabla_\varrho V^\pi(s')$ depends only on the next state $s'$ and not on the reward $r$, so the reward can be marginalized out using $P(s' \mid s, a) = \sum_r P(s', r \mid s, a)$, where $P(s' \mid s, a)$ is the probability of transitioning to state $s'$ given the current state-action pair $(s,a)$. Making this substitution yields the recursion:
$$\nabla_\varrho V^\pi(s) = \sum_{a \in A} \left[ \nabla_\varrho \pi_\varrho(a|s)\, Q^\pi(s,a) + \pi_\varrho(a|s) \sum_{s'} P(s' \mid s, a)\, \nabla_\varrho V^\pi(s') \right]$$
Consider the following visitation sequence and denote the probability of transitioning from state $s$ to state $x$ under policy $\pi_\varrho$ after $k$ steps as $\rho^\pi(s \to x, k)$:
$$s \xrightarrow{\,a \sim \pi_\varrho(\cdot|s)\,} s' \xrightarrow{\,a' \sim \pi_\varrho(\cdot|s')\,} s'' \xrightarrow{\,a'' \sim \pi_\varrho(\cdot|s'')\,} \cdots$$
  • When $k = 0$: $\rho^\pi(s \to s, k = 0) = 1$.
  • When $k = 1$, consider every action that might be taken and add up the probabilities of reaching the desired state:
    $$\rho^\pi(s \to s', k = 1) = \sum_a \pi_\varrho(a|s)\, P(s'|s,a)$$
  • The goal is to move from $s$ to $x$ after $k+1$ steps while following $\pi_\varrho$. The agent can first move from $s$ to an intermediate state $s'$ ($s' \in S$) in the first $k$ steps and then reach the final state $x$ in the last step. This allows the visitation probability to be updated recursively:
    $$\rho^\pi(s \to x, k+1) = \sum_{s'} \rho^\pi(s \to s', k)\, \rho^\pi(s' \to x, 1)$$
Having defined the probability $\rho^\pi(s \to x, k)$ of transitioning from state $s$ to state $x$ after a certain number of steps $k$, the next step is to derive a recursive formulation for $\nabla_\varrho V^\pi(s)$.
To accomplish this, a function $\phi(s)$ is introduced, defined as:
$$\phi(s) = \sum_{a \in A} \nabla_\varrho \pi_\varrho(a|s)\, Q^\pi(s,a)$$
Here, $\phi(s)$ represents the sum of the gradients of the policy $\pi_\varrho$ with respect to $\varrho$, weighted by the corresponding action-value function $Q^\pi(s,a)$.
Using $\phi(s)$, the recursion for $\nabla_\varrho V^\pi(s)$ can be simplified and then unrolled repeatedly:
$$\nabla_\varrho V^\pi(s) = \phi(s) + \sum_{a \in A} \pi_\varrho(a|s) \sum_{s'} P(s'|s,a)\, \nabla_\varrho V^\pi(s')$$
$$= \phi(s) + \sum_{s'} \sum_{a \in A} \pi_\varrho(a|s)\, P(s'|s,a)\, \nabla_\varrho V^\pi(s')$$
$$= \phi(s) + \sum_{s'} \rho^\pi(s \to s', 1)\, \nabla_\varrho V^\pi(s')$$
$$= \phi(s) + \sum_{s'} \rho^\pi(s \to s', 1) \left[ \phi(s') + \sum_{s''} \rho^\pi(s' \to s'', 1)\, \nabla_\varrho V^\pi(s'') \right]$$
Considering $s'$ as the midpoint of the transition from $s$ to $s''$:
$$= \phi(s) + \sum_{s'} \rho^\pi(s \to s', 1)\, \phi(s') + \sum_{s''} \rho^\pi(s \to s'', 2)\, \nabla_\varrho V^\pi(s'')$$
Unrolling the expression for $\nabla_\varrho V^\pi(s)$ further gives:
$$= \phi(s) + \sum_{s'} \rho^\pi(s \to s', 1)\, \phi(s') + \sum_{s''} \rho^\pi(s \to s'', 2)\, \phi(s'') + \sum_{s'''} \rho^\pi(s \to s''', 3)\, \nabla_\varrho V^\pi(s''') + \cdots$$
$$\nabla_\varrho V^\pi(s) = \sum_{x \in S} \sum_{k=0}^{\infty} \rho^\pi(s \to x, k)\, \phi(x)$$
The derivatives of the Q-value function, $\nabla_\varrho Q^\pi(s,a)$, have thus been eliminated. Inserting this result into the objective function $O(\varrho)$ and starting from the initial state $s_0$:
$$\nabla_\varrho O(\varrho) = \nabla_\varrho V^\pi(s_0) = \sum_{s} \sum_{k=0}^{\infty} \rho^\pi(s_0 \to s, k)\, \phi(s)$$
Let $\eta(s) = \sum_{k=0}^{\infty} \rho^\pi(s_0 \to s, k)$. Substituting $\eta(s)$ gives:
$$\nabla_\varrho O(\varrho) = \sum_{s} \eta(s)\, \phi(s)$$
Normalizing $\eta(s)$, $s \in S$, into a probability distribution gives:
$$\nabla_\varrho O(\varrho) = \left( \sum_{s} \eta(s) \right) \sum_{s} \frac{\eta(s)}{\sum_{s'} \eta(s')}\, \phi(s)$$
Since $\sum_{s} \eta(s)$ is a constant, the gradient of the objective function is proportional to the normalized $\eta(s)$ weighted by $\phi(s)$:
$$\nabla_\varrho O(\varrho) \propto \sum_{s} \frac{\eta(s)}{\sum_{s'} \eta(s')}\, \phi(s)$$
where $d^\pi(s) = \frac{\eta(s)}{\sum_{s'} \eta(s')}$ is the stationary distribution.
In the episodic case, the constant of proportionality $\sum_{s} \eta(s)$ is the average length of an episode; in the continuing case, it is one [31].
Expanding $\phi(s)$ again:
$$\nabla_\varrho O(\varrho) \propto \sum_{s \in S} d^\pi(s) \sum_{a \in A} \nabla_\varrho \pi_\varrho(a|s)\, Q^\pi(s,a)$$
$$= \sum_{s \in S} d^\pi(s) \sum_{a \in A} \pi_\varrho(a|s)\, Q^\pi(s,a)\, \frac{\nabla_\varrho \pi_\varrho(a|s)}{\pi_\varrho(a|s)}$$
$$\nabla_\varrho O(\varrho) = \mathbb{E}_\pi\!\left[ Q^\pi(s,a)\, \nabla_\varrho \ln \pi_\varrho(a|s) \right]$$
where $\mathbb{E}_\pi$ refers to $\mathbb{E}_{s \sim d^\pi,\, a \sim \pi_\varrho}$ when both the state and the action distributions follow the policy $\pi_\varrho$ (on-policy).
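In practice this final expectation is estimated from samples: the gradient of the log-policy is weighted by a return estimate that stands in for $Q^\pi(s,a)$. The following PyTorch sketch shows one such REINFORCE-style update for a small discrete-action policy; the network sizes and the use of Monte Carlo returns are illustrative assumptions, not details from this work.

```python
import torch
import torch.nn as nn

# A small policy network over 4-dimensional states and 2 discrete actions (illustrative sizes).
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def policy_gradient_step(states, actions, returns):
    """One update using the estimator E_pi[ Q(s,a) * grad log pi(a|s) ],
    with sampled returns standing in for Q^pi(s,a)."""
    logits = policy(states)                                   # (N, num_actions)
    log_probs = torch.log_softmax(logits, dim=-1)
    log_pi = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(returns * log_pi).mean()                         # minimize the negative objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```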

3.1.2. Off-Policy Policy Gradient

Since DDPG is one of the off-policy policy gradient algorithms, the off-policy policy gradient is first discussed in more detail.
The behavior policy used for collecting samples is known and labeled $\alpha(a|s)$. The objective function sums up the reward over the state distribution defined by this behavior policy:
$$O(\varrho) = \sum_{s \in S} d^\alpha(s) \sum_{a \in A} Q^\pi(s,a)\, \pi_\varrho(a|s) = \mathbb{E}_{s \sim d^\alpha}\left[ \sum_{a \in A} Q^\pi(s,a)\, \pi_\varrho(a|s) \right]$$
where $d^\alpha(s)$ is the stationary distribution of the behavior policy $\alpha$, i.e. $d^\alpha(s) = \lim_{t \to \infty} P(s_t = s \mid s_0, \alpha)$, and $Q^\pi$ is the action-value function estimated with respect to the target policy $\pi$. Given that the training observations are sampled by $\alpha(a|s)$, the gradient can be rewritten as:
$$\nabla_\varrho O(\varrho) = \nabla_\varrho\, \mathbb{E}_{s \sim d^\alpha}\left[ \sum_{a \in A} Q^\pi(s,a)\, \pi_\varrho(a|s) \right]$$
By the derivative product rule:
$$= \mathbb{E}_{s \sim d^\alpha}\left[ \sum_{a \in A} \left( Q^\pi(s,a)\, \nabla_\varrho \pi_\varrho(a|s) + \pi_\varrho(a|s)\, \nabla_\varrho Q^\pi(s,a) \right) \right]$$
Ignoring the term $\pi_\varrho(a|s)\, \nabla_\varrho Q^\pi(s,a)$:
$$\stackrel{(i)}{\approx} \mathbb{E}_{s \sim d^\alpha}\left[ \sum_{a \in A} Q^\pi(s,a)\, \nabla_\varrho \pi_\varrho(a|s) \right]$$
$$= \mathbb{E}_{s \sim d^\alpha}\left[ \sum_{a \in A} \alpha(a|s)\, \frac{\pi_\varrho(a|s)}{\alpha(a|s)}\, Q^\pi(s,a)\, \frac{\nabla_\varrho \pi_\varrho(a|s)}{\pi_\varrho(a|s)} \right]$$
$$= \mathbb{E}_\alpha\left[ \frac{\pi_\varrho(a|s)}{\alpha(a|s)}\, Q^\pi(s,a)\, \nabla_\varrho \ln \pi_\varrho(a|s) \right]$$
where $\frac{\pi_\varrho(a|s)}{\alpha(a|s)}$ is the importance weight. Since $Q^\pi$ is a function of the target policy and, consequently, of the policy parameter $\varrho$, the derivative $\nabla_\varrho Q^\pi(s,a)$ should also be computed according to the product rule. However, computing $\nabla_\varrho Q^\pi(s,a)$ directly is challenging in practice. Fortunately, by approximating the gradient and ignoring the gradient of $Q^\pi$, policy improvement can still be guaranteed, and convergence to a local optimum is eventually achieved.
In summary, when applying the policy gradient in the off-policy setting, it is adjusted by a weighting term, the ratio of the target policy to the behavior policy, $\frac{\pi_\varrho(a|s)}{\alpha(a|s)}$ [31].
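In code, this correction is a per-sample ratio multiplying the same log-probability term. The sketch below is a schematic continuation of the previous example (it reuses the policy and optimizer defined there) and assumes the behavior policy's log-probabilities were stored with each sample.

```python
def off_policy_gradient_step(states, actions, returns, behavior_log_pi):
    """Off-policy variant: weight each sample by pi_target(a|s) / alpha(a|s)."""
    logits = policy(states)
    log_pi = torch.log_softmax(logits, dim=-1).gather(1, actions.unsqueeze(1)).squeeze(1)
    # Importance weight pi/alpha; detached so no gradient flows through the weight itself.
    importance_weight = torch.exp(log_pi - behavior_log_pi).detach()
    loss = -(importance_weight * returns * log_pi).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```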

3.2. Deterministic Policy Gradient (DPG)

The policy function π ( · | s ) is typically represented as a probability distribution over actions A based on the current state, making it inherently stochastic. However, in the case of the Deterministic Policy Gradient (DPG), the policy is modeled as a deterministic decision, denoted as a = μ ( s ) . Instead of selecting actions probabilistically, DPG directly maps states to specific actions without uncertainty. Let:
  • $\rho_0(s)$: the initial distribution over states,
  • $\rho^\mu(s \to s', k)$: starting from state $s$, the visitation probability density at state $s'$ after moving $k$ steps under policy $\mu$,
  • $\rho^\mu(s')$: the discounted state distribution, defined as
    $$\rho^\mu(s') = \int_S \sum_{k=1}^{\infty} \gamma^{k-1} \rho_0(s)\, \rho^\mu(s \to s', k)\, ds$$
The objective function to optimize is:
$$O(\varrho) = \int_S \rho^\mu(s)\, Q\big(s, \mu_\varrho(s)\big)\, ds$$
According to the chain rule, first take the gradient of $Q$ with respect to the action $a$, and then take the gradient of the deterministic policy function $\mu$ with respect to $\varrho$:
$$\nabla_\varrho O(\varrho) = \int_S \rho^\mu(s)\, \nabla_a Q^\mu(s,a)\, \nabla_\varrho \mu_\varrho(s)\big|_{a=\mu_\varrho(s)}\, ds = \mathbb{E}_{s \sim \rho^\mu}\left[ \nabla_a Q^\mu(s,a)\, \nabla_\varrho \mu_\varrho(s)\big|_{a=\mu_\varrho(s)} \right]$$

3.3. Deep Deterministic Policy Gradient (DDPG)

By combining DQN and DPG, DDPG leverages the power of deep neural networks to handle high-dimensional state spaces and complex action spaces, making it suitable for a wide range of reinforcement learning tasks. The original DQN works in discrete action spaces, and DDPG extends it to continuous action spaces with the actor-critic framework while learning a deterministic policy. In order to achieve better exploration, an exploration policy $\mu'$ is constructed by adding noise $\mathcal{N}$:
$$\mu'(s) = \mu_\varrho(s) + \mathcal{N}$$
Moreover, the DDPG algorithm integrates a technique known as soft updates, or conservative policy iteration, to update the parameters of the target actor and target critic networks. This methodology uses a small parameter $\tau$, much smaller than one ($\tau \ll 1$).
The soft update equation is formulated as
$$\varrho' \leftarrow \tau \varrho + (1 - \tau)\, \varrho'$$
where $\varrho$ denotes the parameters of the online network and $\varrho'$ those of the corresponding target network.
It guarantees that the target network values alter gradually over time, unlike the approach employed in DQN, where the target network remains static for a fixed period.
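A soft (Polyak) update is a one-line blend of the online and target parameters. A minimal PyTorch sketch, assuming the actor and critic networks are standard nn.Module instances defined elsewhere:

```python
import torch

def soft_update(target_net, online_net, tau=0.005):
    """target_params <- tau * online_params + (1 - tau) * target_params."""
    with torch.no_grad():
        for target_p, online_p in zip(target_net.parameters(), online_net.parameters()):
            target_p.mul_(1.0 - tau).add_(tau * online_p)
```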

3.4. Working of DDPG Algorithm

Algorithm 1: Deep Deterministic Policy Gradient
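To summarize the working of DDPG concretely, the sketch below outlines one update step in PyTorch-style Python under common simplifying assumptions: a generic replay buffer with a sample() method, continuous actions, and the soft_update helper from the previous section. It is a schematic illustration of the algorithm's structure, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def ddpg_update(actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, replay_buffer,
                batch_size=2048, gamma=0.95, tau=0.005):
    """One DDPG gradient step on a sampled mini-batch."""
    s, a, r, s_next, done = replay_buffer.sample(batch_size)

    # Critic update: regress Q(s, a) toward the bootstrapped target value.
    with torch.no_grad():
        q_next = target_critic(s_next, target_actor(s_next))
        y = r + gamma * (1.0 - done) * q_next
    critic_loss = F.mse_loss(critic(s, a), y)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor update: ascend Q(s, mu(s)) following the deterministic policy gradient.
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # Soft (Polyak) updates of both target networks.
    soft_update(target_actor, actor, tau)
    soft_update(target_critic, critic, tau)
    return actor_loss.item(), critic_loss.item()
```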

4. Results And Discussions

4.1. Software Configuration

The deep reinforcement learning agent was trained using Python within a Jupyter Notebook on a Linux Ubuntu 20.04 operating system. The training process spanned approximately 5.415 hours on a modest hardware configuration consisting of an Intel graphics card, 8 GB of RAM, and a 1.9 GHz processor.

4.2. Hyperparameter Selection and Initial Search

Parameters given in Table 2 are selected in this work.

4.2.1. Batch Size Comparison

Upon completing the training process, it was observed that there was minimal disparity in the success rate (Figure 2) and cumulative reward (Figure 3) achieved across the different batch sizes. In addition, the decrease in the critic loss values (Figure 4) and actor loss values (Figure 5) indicates an improvement in the actor and critic networks' ability to approximate the optimal policy and value functions. Although the success rate and cumulative reward were similar, the enhanced convergence demonstrated by the lower losses at the batch size of 2048 suggests a more efficient learning process and a potentially higher quality of learned policies.

4.2.2. Learning Rate Comparison

After the training process, the observations (Figure 8) revealed minimal disparity in the cumulative reward, and the success rates achieved with the two learning rates were also similar, as shown in Figure 7. The learning rate of 2e-4 displayed slightly superior performance compared with 1e-3 in terms of success rate. Conversely, when using the learning rate of 1e-3, a notable decrease in actor loss (Figure 9) and critic loss (Figure 10) was observed, indicating improved policy and value estimation by the agent. Despite comparable cumulative rewards and success rates, the reduced losses at the learning rate of 1e-3 signify enhanced convergence and a potentially more efficient learning process, suggesting that the agent may have acquired higher-quality policies.

4.3. Selection of Optimal Hyperparameters and Extended Training of DDPG Agent

After comparing the hyperparameters, as depicted in the preceding figures, and conducting a thorough analysis of the associated results, the selection of hyperparameters with promising performance was undertaken. Following this, an extended training phase was initiated, encompassing 100,000 time steps. This extended training phase serves as the fundamental training stage, which will be elaborated upon in the subsequent section.
Table 3. Parameters and Selected Hyper parameters.
Parameter Value
Policy MultiInputPolicy
Replay buffer class HerReplayBuffer
Verbose 1
Gamma 0.95
Tau ( τ ) 0.005
Batch size 2048
Buffer size 100000
Replay buffer kwargs rb kwargs
Learning rate 1e-3
Action noise Normal action noise
Policy kwargs Policy kwargs
Tensorboard log Log path
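The configuration in Table 3 corresponds closely to the Stable-Baselines3 DDPG interface used with panda-gym. The sketch below shows how such a run could be assembled; the environment id, the HER replay-buffer kwargs, the noise scale, and the log path are assumptions standing in for the placeholders ("rb kwargs", "Policy kwargs", "Log path") in the table, and exact argument names may vary across library versions.

```python
import gymnasium as gym
import numpy as np
import panda_gym  # registers the Panda environments
from stable_baselines3 import DDPG, HerReplayBuffer
from stable_baselines3.common.noise import NormalActionNoise

# Environment id depends on the installed panda-gym version (e.g. PandaReach-v3).
env = gym.make("PandaReach-v3")

n_actions = env.action_space.shape[0]
action_noise = NormalActionNoise(mean=np.zeros(n_actions), sigma=0.1 * np.ones(n_actions))

model = DDPG(
    "MultiInputPolicy",
    env,
    replay_buffer_class=HerReplayBuffer,
    replay_buffer_kwargs=dict(n_sampled_goal=4, goal_selection_strategy="future"),
    learning_rate=1e-3,
    buffer_size=100_000,
    batch_size=2048,
    gamma=0.95,
    tau=0.005,
    action_noise=action_noise,
    verbose=1,
    tensorboard_log="./ddpg_panda_reach_logs/",
)
model.learn(total_timesteps=100_000)
model.save("ddpg_panda_reach")
```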
Table 4. Training Metrics at the 200 time step and at the 100,000 time step.
Metric  At 200 time steps  At 100,000 time steps
rollout/
Episode length  50  50
Episode mean reward  -49.2  -1.8
Success rate  0  1
time/
Episodes  4  2000
FPS  18  5
Time elapsed (s)  10  19505
Total time steps  200  100000
train/
Actor loss  0.625  0.0846
Critic loss  0.401  0.00486
Learning rate  0.001  0.001
Number of updates  50  99850

4.3.1. Improvement in Cumulative Reward and Success Rate

The mean episode reward (Figure 11) improves from -49.2 at the start of training to -1.8 at the end. The success rate (Figure 12) increases from 0 to 1 over the same period.

4.3.2. Frames per Second (FPS)

The training speed (Figure 13) decreases from 18 frames per second (FPS) at the start of training to 5 FPS at the end.

4.3.3. Improvement in Actor and Critic Losses

The actor loss (Figure 14) decreases from 0.625 at the start of training to 0.0846 at the end. The critic loss (Figure 15) decreases from 0.401 to 0.00486 over the same period.

4.4. Comparing DDPG and PPO: Off-Policy vs. On-Policy Reinforcement Learning Algorithms

Proximal Policy Optimization (PPO), an on-policy reinforcement learning algorithm, was trained to compare its performance with Deep Deterministic Policy Gradient (DDPG), an off-policy algorithm, as shown in Figure 16 and Figure 17. The cumulative reward achieved by DDPG was -1.8, whereas the cumulative reward obtained by PPO was -50, as shown in Figure 16. The results of this comparison indicate that, in this particular scenario, DDPG exhibited superior performance over PPO in terms of cumulative reward.

5. Conclusion

In this study, the Deep Deterministic Policy Gradient (DDPG) algorithm is applied to train a robotic arm manipulator, specifically the Franka Panda robotic arm, for a target-reaching task. The objective of this task is to enable the robotic arm to accurately reach a designated target position. The DDPG algorithm is chosen because of its effectiveness in continuous control tasks and its ability to learn policies with high-dimensional action spaces. By leveraging a combination of deep neural networks and an actor-critic architecture, DDPG approximates the optimal policy for the robotic arm. Comparing the performance of PPO and DDPG after training for 100,000 time steps:
PPO achieved a mean episode reward of -50, indicating that the agent struggled to achieve positive rewards on average. Despite training at a relatively fast speed of 561 FPS, the results suggest that PPO faced challenges in finding successful strategies for the given task.
On the other hand, DDPG demonstrated superior performance with a mean episode reward of -1.8. It achieved a success rate of 1, indicating consistent success in reaching desired outcomes. Despite a slower training speed of 5 FPS, DDPG showcased its capability to effectively learn and improve its policy over time. Based on these results, DDPG outperformed PPO in terms of cumulative reward and success rate in the given scenario.

Author Contributions

The authors contributions in this manuscript are stated as follows: Conceptualization, L.H. and Y.A.; methodology, L.H.; software, L.H.; validation, A.T., Y.S.J. and S.J.; formal analysis, L.H.; investigation, A.T., Y.S.J.; resources, L.H.; data curation, L.H.; writing—original draft preparation, L.H.; writing—review and editing, Y.A., and A.T., and S.J.; visualization, A.T. and L.H.; supervision, Y.A. and S.J.; project administration, S.J.; funding acquisition, S.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the project for Smart Manufacturing Innovation R&D funded by the Korean Ministry of SMEs and Startups in 2024 (Project No. RS-2024-00434311).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All the required data in this work are available with the authors and they can be provided upon request.

Acknowledgments

This work was supported by the project for Smart Manufacturing Innovation R&D funded by the Korean Ministry of SMEs and Startups in 2024 (Project No. RS-2024-00434311).

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Mohsen, S.; Behrooz, A.; Roza, D. Artificial intelligence, machine learning and deep learning in advanced robotics, a review. Cognitive Robotics 2023, 3, 54–70.
  2. Sridharan, M.; Stone, P. Color Learning on a Mobile Robot: Towards Full Autonomy under Changing Illumination. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2007; pp. 2212–2217.
  3. Xinle, Y.; Minghe, S.; Lingling, S. Adaptive and intelligent control of a dual-arm space robot for target manipulation during the post-capture phase. Aerospace Science and Technology 2023, 142, 108688.
  4. Abayasiri, R.A.M.; Jayasekara, A.G.B.P.; Gopura, R.A.R.C.; Kazuo, K. Intelligent Object Manipulation for a Wheelchair-Mounted Robotic Arm. Journal of Robotics 2024.
  5. Mohammed, M.A.; Hui, L.; Norbert, S.; Kerstin, T. Intelligent arm manipulation system in life science labs using H20 mobile robot and Kinect sensor. In 2016 IEEE 8th International Conference on Intelligent Systems (IS), Sofia, Bulgaria, 2016.
  6. Ohmura, Y.; Kuniyoshi, Y. Humanoid robot which can lift a 30 kg box by whole body contact and tactile feedback. In 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 2007; pp. 1136–1141.
  7. Li, Z.; Ming, J.; Dewan, F.; Hossain, M.A. Intelligent facial emotion recognition and semantic-based topic detection for a humanoid robot. Expert Systems with Applications 2013, 40, 5160–5168.
  8. Martin, J.G.; Muros, F.J.; Maestre, J.M.; Camacho, E.F. Multi-robot task allocation clustering based on game theory. Robotics and Autonomous Systems 2023, 161, 104314.
  9. Nguyen, M.N.T.; Ba, D.X. A neural flexible PID controller for task-space control of robotic manipulators. Frontiers in Robotics and AI 2023, 9, 975850.
  10. Laurenzi, A.; Antonucci, D.; Tsagarakis, N.G.; Muratore, L. The XBot2 real-time middleware for robotics. Robotics and Autonomous Systems 2023, 163, 104379.
  11. Zhang, L.; Jiang, M.; Farid, D.; Hossain, M.A. Intelligent Facial Emotion Recognition and Semantic-Based Topic Detection for a Humanoid Robot. Expert Systems with Applications 2013, 40, 5160–5168.
  12. Floreano, D.; Wood, R.J. Science, Technology, and the Future of Small Autonomous Drones. Nature 2015, 521, 460–466.
  13. Chen, T.D.; Kockelman, K.M.; Hanna, J.P. Operations of a Shared, Autonomous, Electric Vehicle Fleet: Implications of Vehicle & Charging Infrastructure Decisions. Transportation Research Part A: Policy and Practice 2016.
  14. Chen, Z.; Jia, X.; Riedel, A.; Zhang, M. A Bio-Inspired Swimming Robot. In 2014 IEEE International Conference on Robotics and Automation (ICRA), 2014; pp. 2564–2564.
  15. Ohmura, Y.; Kuniyoshi, Y. Humanoid Robot Which Can Lift a 30 kg Box by Whole Body Contact and Tactile Feedback. In 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2007; pp. 1136–1141.
  16. Kappassov, Z.; Corrales, J.-A.; Perdereau, V. Tactile Sensing in Dexterous Robot Hands. Robotics and Autonomous Systems 2015, 74, 195–220.
  17. Arisumi, H.; Miossec, S.; Chardonnet, J.-R.; Yokoi, K. Dynamic Lifting by Whole Body Motion of Humanoid Robots. In 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2008; pp. 668–675.
  18. Aryslan, M.; Yevgeniy, L.; Troy, H.; Richard, P. A deep reinforcement-learning approach for inverse kinematics solution of a high degree of freedom robotic manipulator. Robotics 2022, 11, 44.
  19. Serhat, O.; Enver, T.; Erkan, Z. Adaptive Cartesian space control of robotic manipulators: A concurrent learning based approach. Journal of the Franklin Institute 2024, 361, 106701.
  20. Kaelbling, L.P.; Littman, M.L.; Moore, A. Reinforcement Learning: A Survey. Journal of Artificial Intelligence Research 1996, 4, 237–285.
  21. Fadi, A.; Katarina, G. Reinforcement learning algorithms: An overview and classification. In 2021 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), 2021; pp. 1–7.
  22. Thrun, S.; Littman, M.L. Reinforcement Learning: An Introduction. AI Magazine 2000, 21, 103.
  23. Amarjyoti, S. Deep reinforcement learning for robotic manipulation-the state of the art. arXiv:1701.08878, 2017.
  24. Kober, J.; Bagnell, J.A. Reinforcement learning in robotics: A survey. The International Journal of Robotics Research 2013, 32, 1238–1274.
  25. Tianci, G. Optimizing robotic arm control using deep Q-learning and artificial neural networks through demonstration-based methodologies: A case study of dynamic and static conditions. Robotics and Autonomous Systems 2024, 104771.
  26. Andrea, F.; Elisa. Robotic Arm Control and Task Training through Deep Reinforcement Learning. arXiv:2005.02632, 2020.
  27. Jonaid, S.; Michael. Optimizing Deep Reinforcement Learning for Adaptive Robotic Arm Control. arXiv:2407.02503, 2024.
  28. Roman, P.; Jakub, K. Computation 2024, 12(6), 116.
  29. Wanqing, X.; Yuqian. Deep reinforcement learning based proactive dynamic obstacle avoidance for safe human-robot collaboration. Manufacturing Letters 2024, 1246–1256.
  30. Franka Emika Documentation. Control Parameters Documentation, 2024.
  31. Weng, L. Policy Gradient Algorithms. Lil'Log, 2018. Available online: https://lilianweng.github.io/posts/2018-04-08-policy-gradient.
Figure 1. DH Axis Representation of Franka Panda robot [30].
Figure 2. Success Rate in Different Batch Sizes.
Figure 3. Cumulative Mean Reward in Different Batch Sizes.
Figure 4. Critic Loss in Different Batch Sizes.
Figure 5. Actor Loss in Different Batch Sizes.
Figure 6. Frames per Second in Different Batch Sizes.
Figure 7. Success Rate in Different Learning Rates.
Figure 8. Cumulative Mean Reward in Different Learning Rates.
Figure 9. Actor Loss in Different Learning Rates.
Figure 10. Critic Loss in Different Learning Rates.
Figure 11. Improved Cumulative Mean Reward.
Figure 12. Improved Success Rate.
Figure 13. Frames per Second (FPS) or Training Speed.
Figure 14. Improved Actor Loss.
Figure 15. Improved Critic Loss.
Figure 16. Cumulative Mean Reward.
Figure 17. Training Speed.
Table 1. DH Axis Representation.
Joint  a (m)  d (m)  δ (deg)  ϕ (rad)
1 0 0.333 0 ϕ 1
2 0 0 -90 ϕ 2
3 0 0.316 90 ϕ 3
4 0.0825 0 90 ϕ 4
5 -0.0825 0.384 -90 ϕ 5
6 0 0 90 ϕ 6
7 0.088 0 90 ϕ 7
Flange 0 0.107 0 0
Table 2. Parameters and Hyper Parameters.
Parameter Value
Policy MultiInputPolicy
Replay buffer class HerReplayBuffer
Verbose 1
Gamma 0.95
Tau ( τ ) 0.005
Batch size 512,1024,2048
Buffer size 100000
Replay buffer kwargs rb kwargs
Learning rate 1e-3 , 2e-4
Action noise Normal action noise
Policy kwargs Policy kwargs
Tensorboard log Log path
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.