Preprint Article, Version 1 (not peer-reviewed); preserved in Portico.

Resource-Efficient Optimization for FPGA-Based Convolution Accelerator

Version 1 : Received: 24 July 2023 / Approved: 25 July 2023 / Online: 26 July 2023 (11:22:49 CEST)

How to cite: Ma, Y.; Xu, Q.; Song, Z. Resource-Efficient Optimization for FPGA-Based Convolution Accelerator. Preprints 2023, 2023071705. https://doi.org/10.20944/preprints202307.1705.v1

Abstract

Convolution is one of the most essential operations in FPGA-based hardware accelerators. However, existing designs often neglect the inherent architecture of the FPGA, which poses a severe challenge for hardware resource requirements. Although some previous works have proposed approximate multipliers or convolution acceleration algorithms to address this issue, the inevitable accuracy loss and resource occupation easily lead to performance degradation. To this end, we first propose two kinds of resource-efficient, optimized accurate multipliers based on LUTs or carry chains. Then, targeting FPGA-based platforms, a generic multiply-accumulate structure is constructed by directly accumulating the partial products produced by our proposed optimized radix-4 Booth multipliers, without computing intermediate multiplication and addition results. Experimental results demonstrate that the proposed multiplier achieves up to a 51% look-up-table (LUT) reduction compared to the Vivado area-optimized multiplier IP. Furthermore, the convolutional processing unit using the proposed structure achieves a 36% LUT reduction compared to existing methods. As case studies, the proposed method is applied to the DCT transform and LeNet, achieving hardware resource savings without loss of accuracy.
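
The abstract only sketches the core idea at a high level. As an illustration of the technique it names, the minimal Python model below shows radix-4 Booth recoding and a multiply-accumulate loop that sums the resulting partial products directly, without forming a separate product for each multiplication. This is a behavioural sketch written for this summary, not the authors' hardware design; the function names and the assumed even 8-bit operand width are illustrative assumptions.

    def booth_radix4_digits(b, width=8):
        """Radix-4 Booth recoding of a signed integer b with even bit width.
        Returns digits d_i in {-2, -1, 0, 1, 2} such that b == sum(d_i * 4**i)."""
        assert width % 2 == 0
        u = b & ((1 << width) - 1)        # two's-complement bit pattern of b
        prev = 0                          # implicit bit below the LSB
        digits = []
        for i in range(0, width, 2):
            b0 = (u >> i) & 1
            b1 = (u >> (i + 1)) & 1
            digits.append(-2 * b1 + b0 + prev)
            prev = b1
        return digits

    def mac_from_partial_products(activations, weights, width=8):
        """Multiply-accumulate by accumulating Booth partial products directly,
        with no per-product intermediate multiplication result."""
        acc = 0
        for a, w in zip(activations, weights):
            for i, d in enumerate(booth_radix4_digits(w, width)):
                acc += (d * a) << (2 * i)   # partial product d * a * 4**i
        return acc

    # Example: 3*5 + (-2)*7 == 1
    print(mac_from_partial_products([3, -2], [5, 7]))

In hardware, the inner accumulation would map to a single compressor/adder tree shared across the convolution window, which is the source of the LUT savings the abstract reports.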

Keywords

Convolution; Multiplier; Look-up table; Carry chain; FPGA.

Subject

Engineering, Electrical and Electronic Engineering
