Pixel array chips are widely used in physics experiments. The data rate of a pixel array chip can be considerably high (640 Mbps for Topmetal-M), and the analog output is generally converted to a digital value by a low-precision ADC at the output end. When the quantized data are analyzed for online pattern recognition, neural networks and deep learning are promising methods for on-site preprocessing that alleviates the burden of offline analysis. In deep learning, the basic arithmetic unit is the multiply-and-accumulate (MAC). Current MAC designs target the FP32, FP16, and INT8 data types. However, in pixel-sensor applications, lower precision helps reduce the transmitted data volume without greatly degrading discrimination ability. In this abstract, we use the 3-bit low-precision data from the Topmetal-M pixel array chip as an example. We design 3-bit adders with several structures, including the carry-ripple adder, carry-skip adder, carry-lookahead adder, and parallel-prefix adder. We also implement multipliers with several structures, including the array multiplier and the Booth-coded multiplier. A comprehensive analysis targeting the best power, performance, and area (PPA) is conducted. Based on this analysis, we propose a new low-precision multiply-and-accumulate unit (LPMAC), built from the carry-lookahead adder, the carry-save adder, and the Booth-coded multiplier, to support low-precision operations in neural networks. These designs have been verified in a commercial 130 nm process and successfully functionally simulated on FPGAs.
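As an illustrative sketch (not the actual RTL), the two building blocks named above can be modeled behaviorally in a few lines of Python: a carry-lookahead addition expressed through generate/propagate terms, and a radix-2 Booth-coded multiplication of 3-bit signed operands. The function names, bit widths, and loop structure here are our own assumptions for demonstration; in hardware the lookahead carries are computed in parallel rather than in a loop, and the accumulation would use the carry-save adder.

```python
def cla_add(a, b, bits):
    """Behavioral carry-lookahead add of two unsigned bits-wide values.

    Models c[i+1] = g[i] | (p[i] & c[i]); real CLA logic evaluates
    all carries in parallel from the generate/propagate vectors.
    """
    g, p = a & b, a ^ b          # generate and propagate vectors
    c = 0
    for i in range(bits):
        ci = (c >> i) & 1
        gi = (g >> i) & 1
        pi = (p >> i) & 1
        c |= (gi | (pi & ci)) << (i + 1)
    return (p ^ c) & ((1 << bits) - 1)  # sum, truncated to bits

def booth_mul(a, b, bits=3):
    """Radix-2 Booth multiplication of two bits-wide signed integers.

    Scans adjacent bit pairs of b (with an implicit 0 below bit 0):
    a 0->1 transition starts a run of 1s (subtract a << i),
    a 1->0 transition ends one (add a << i).
    """
    b_u = b & ((1 << bits) - 1)  # two's-complement encoding of b
    acc, prev = 0, 0
    for i in range(bits):
        cur = (b_u >> i) & 1
        if (cur, prev) == (1, 0):
            acc -= a << i        # start of a run of 1s
        elif (cur, prev) == (0, 1):
            acc += a << i        # end of a run of 1s
        prev = cur
    return acc

def mac(weights, inputs, bits=3):
    """Toy MAC: accumulate Booth products of 3-bit signed operands.

    The accumulation here uses plain Python addition; the proposed
    LPMAC would instead combine carry-save and carry-lookahead adders.
    """
    total = 0
    for w, x in zip(weights, inputs):
        total += booth_mul(w, x, bits)
    return total
```

For 3-bit two's-complement operands the representable range is -4 to 3, so the model can be checked exhaustively against ordinary integer multiplication over all 64 operand pairs.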