Deep neural networks equipped with powerful auto-optimization tools have been widely applied in many research fields, e.g., NLP and computer vision. However, existing neural network architectures are typically constructed with different inductive biases, i.e., built-in preconceptions intended to shrink the parameter search space during training, reduce computational cost, or inject expert knowledge into the design. As an alternative, the Multilayer Perceptron (MLP) offers much greater freedom for exploration: it carries a lower inductive bias than convolutional neural networks (CNNs) and provides good flexibility for learning complex patterns. Nevertheless, such architectures are commonly built in flat Euclidean space, which is not necessarily optimal for all data and is particularly ill-suited to modeling hierarchical correlations. Hyperbolic neural networks (HNNs) have gained attention for their ability to capture the hierarchical structures present in complex data types such as graphs. Recently, there has been increasing interest in extending HNNs to computer vision tasks, motivated by the observation that images possess rich hierarchical relations. However, this is generally done by employing a Euclidean backbone to learn high-level semantic representations and incorporating only a hyperbolic classifier for the final classification, which, we argue, does not fully exploit the advantages of hyperbolic space. Motivated by the resurgence of the attention-free MLP, in this paper we extend it to non-Euclidean space and propose a novel architecture, named Hyperbolic Res-MLP (HR-MLP), that leverages fully hyperbolic layers to learn feature embeddings and perform image classification in an end-to-end fashion.
With the help of the proposed Lorentz cross-patch and cross-channel layers, we can perform operations directly in the hyperbolic domain with fewer parameters, making the model faster to train while delivering comparatively better performance than its Euclidean counterpart. Experiments on CIFAR10, CIFAR100, and MiniImageNet demonstrate comparable or superior performance relative to Euclidean baselines. Our code is available at https://github.com/Ahmad-Omar-Ahsan/HR-MLP.
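As a minimal illustration of what "operating directly in the hyperbolic domain" involves, the sketch below maps a Euclidean tangent vector onto the Lorentz (hyperboloid) model of hyperbolic space via the exponential map at the origin. This is a standard textbook construction for unit negative curvature, not the paper's specific cross-patch or cross-channel layers, and all function names are illustrative:

```python
import numpy as np

def lorentz_inner(x, y):
    # Lorentzian inner product: <x, y>_L = -x0*y0 + sum_i xi*yi
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def exp_map_origin(v):
    # Exponential map at the hyperboloid origin o = (1, 0, ..., 0),
    # taking a tangent vector v (with v[0] == 0) onto the manifold:
    #   exp_o(v) = cosh(||v||) * o + sinh(||v||) * v / ||v||
    norm = np.linalg.norm(v[1:])
    o = np.zeros_like(v)
    o[0] = 1.0
    if norm < 1e-9:
        return o  # zero tangent vector maps to the origin itself
    return np.cosh(norm) * o + np.sinh(norm) * v / norm

# Lift a Euclidean feature vector into hyperbolic space.
v = np.array([0.0, 0.3, -0.4])   # tangent vector at the origin
x = exp_map_origin(v)
# The result lies on the hyperboloid: <x, x>_L = -1
```

A fully hyperbolic layer, in this spirit, keeps its activations on the hyperboloid (satisfying the `<x, x>_L = -1` constraint) throughout, rather than mapping back to Euclidean space between operations.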