Abstract:
Field Programmable Gate Arrays (FPGAs) are used to improve the safety, performance, and efficiency of cryptographic operations in resource-constrained contexts. In recent years, deep learning has played an increasingly important role in this area, particularly in achieving low latency and space efficiency in FPGA-based implementations. In the proposed model, called EffiConvNet (Efficient Convolution Network), ternary neural networks,
logic expansion, and block convolution are all integrated. Block convolution minimises the data dependence among spatial tiles, which eases the load on on-chip memory and enables efficient processing. Logic expansion replaces the network's XNOR gates with more expressive logic functions, allowing FPGA resources to be used more effectively. To reach the desired degree of efficiency at the training stage, ternary neural networks are additionally employed. The experimental
results of our technique on real-world tasks show that it is effective, and the combined architecture (EffiConvNet) demonstrates the efficacy of our approach. By ensuring efficient resource utilisation and improved inference performance, the proposed combination strategy offers a promising option for addressing the obstacles associated with deploying large-scale neural networks on FPGAs.
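The tiling idea behind block convolution can be sketched as follows: a 2-D convolution is computed tile by tile, so only one small block of the input (plus its halo border) needs to be buffered at a time. This is a minimal illustration in plain NumPy; the tile size, halo handling, and function names are assumptions for this sketch, not the paper's exact scheme.

```python
import numpy as np

def conv2d_valid(x, k):
    """Reference 'valid' 2-D convolution (no padding), for comparison."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def block_conv2d(x, k, tile=4):
    """Compute the same convolution tile by tile.

    Each output tile is produced from a small input patch (tile plus a
    halo of kh-1 rows / kw-1 columns), so only that patch must reside in
    fast on-chip memory at any one time.
    """
    kh, kw = k.shape
    H, W = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((H, W))
    for i0 in range(0, H, tile):
        for j0 in range(0, W, tile):
            i1, j1 = min(i0 + tile, H), min(j0 + tile, W)
            patch = x[i0:i1 + kh - 1, j0:j1 + kw - 1]  # tile + halo
            out[i0:i1, j0:j1] = conv2d_valid(patch, k)
    return out

rng = np.random.default_rng(0)
x = rng.random((10, 10))
k = rng.random((3, 3))
assert np.allclose(block_conv2d(x, k), conv2d_valid(x, k))
```

Note that this sketch still reads overlapping halo pixels across tile borders; the data-dependence optimisation described in the abstract targets exactly that inter-tile coupling.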
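Ternary quantisation, as used in ternary neural networks, can likewise be sketched in a few lines: each real-valued weight is mapped to {-1, 0, +1}. The fixed threshold below is an illustrative assumption; EffiConvNet's actual ternarisation and training procedure are not reproduced here.

```python
import numpy as np

def ternarize(w, threshold=0.05):
    """Map real-valued weights to {-1, 0, +1} using a fixed threshold.

    Illustrative only: the threshold choice is a placeholder, not the
    paper's training-time ternarisation scheme.
    """
    q = np.zeros_like(w)
    q[w > threshold] = 1.0   # strongly positive weights -> +1
    q[w < -threshold] = -1.0  # strongly negative weights -> -1
    return q                  # everything near zero stays 0

w = np.array([0.8, -0.02, -0.6, 0.04, -0.3])
print(ternarize(w))  # -> [ 1.  0. -1.  0. -1.]
```

Restricting weights to three values lets multiplications collapse to additions, subtractions, and skips, which is what makes such networks attractive on FPGA fabric.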