When implementing SVMs, two major problems arise: (a) the number of local minima increases exponentially with the number of samples, and (b) the computer storage required by a regular quadratic programming solver grows exponentially as the problem size expands. The Kernel-Adatron family of algorithms has been gaining attention lately because it can handle very large classification and regression problems. However, these methods treat different types of samples (noise, border, and core) in the same manner, which causes searches in unpromising areas and increases the number of iterations. In this work, we introduce a hybrid method to overcome these shortcomings, namely the Optimal Recurrent Neural Network Density-Based Support Vector Machine (Opt-RNN-DBSVM). This method consists of four steps: (a) characterization of the different samples, (b) elimination of samples with a low probability of being support vectors, (c) construction of an appropriate recurrent neural network based on an original energy function, and (d) solution of the system of differential equations governing the dynamics of the RNN using the Euler-Cauchy method with an optimal time step. Thanks to its recurrent architecture, the RNN remembers the regions explored during the search process. We demonstrate that RNN-SVM converges to feasible support vectors and that Opt-RNN-DBSVM has very low time complexity compared to RNN-SVM with a constant time step and to KAs-SVM. Several experiments were performed on academic data sets, using a range of classification performance measures to compare Opt-RNN-DBSVM to different classification methods; the results show the good performance of the proposed method.
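To make step (d) concrete, the sketch below illustrates one way an Euler-Cauchy (explicit Euler) integration with an optimal time step can be realized. It is only an illustration under assumptions not stated in this abstract: the RNN dynamics are taken to be the gradient flow of a quadratic energy E(u) = (1/2)u^T A u - b^T u with A symmetric positive definite (the shape of an SVM dual objective), for which the energy-minimizing step along the descent direction has a closed form. The function name and interface are hypothetical, not the paper's actual implementation.

```python
import numpy as np

def euler_optimal_step(A, b, u0, tol=1e-6, max_iter=1000):
    """Integrate the gradient flow du/dt = -(A u - b) with the explicit
    Euler (Euler-Cauchy) scheme. At each iteration the time step is the
    one minimizing the assumed quadratic energy E(u) = 0.5 u^T A u - b^T u
    along the descent direction (exact line search). Requires A symmetric
    positive definite."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        g = A @ u - b                # gradient of E, i.e. -du/dt
        gnorm2 = g @ g
        if gnorm2 < tol ** 2:        # stationary point reached
            break
        dt = gnorm2 / (g @ (A @ g))  # step minimizing E(u - dt * g)
        u = u - dt * g               # one Euler-Cauchy update
    return u

# Toy usage on a small well-conditioned system (illustrative only)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
u_star = euler_optimal_step(A, b, np.zeros(2))
print(u_star, np.linalg.solve(A, b))  # both approximate the minimizer
```

Because the step is recomputed from the current gradient at every iteration, no hand-tuned constant time step is needed, which is the intuition behind the complexity advantage claimed over constant-time-step RNN-SVM.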