Convolutional Neural Networks (CNNs) have become the state of the art for classification tasks due to their superior accuracy. BinArray is a custom hardware accelerator for CNNs with binary approximated weights. The binary approximation used is a network compression technique that drastically reduces the number of multiplications required per inference with little or no accuracy degradation. BinArray scales and allows trading off hardware resource usage against throughput by means of three design parameters that are transparent to the user. Furthermore, high accuracy or high throughput can be selected dynamically at runtime. BinArray has been optimized at the register-transfer level and operates at 400 MHz as an instruction-set processor within a heterogeneous XC7Z045-2 FPGA-SoC platform. Experimental results show that BinArray scales to match the performance of other accelerators for different network sizes. Even for the largest MobileNet, only 50% of the target device and only 96 DSP blocks are utilized.
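The abstract does not spell out the binary approximation scheme; a common form of this compression technique represents each real-valued weight tensor as a small sum of binary arrays with per-array scaling factors, so that multiplications reduce to sign flips plus a handful of scalings. The following is a minimal NumPy sketch of one such greedy sign-and-scale fitting (function name and number of binary bases `m` are illustrative assumptions, not BinArray's exact method):

```python
import numpy as np

def binary_approx(w, m=3):
    """Greedily fit w ~ sum_j alpha_j * b_j with b_j in {-1, +1}.

    NOTE: illustrative sketch only; BinArray's actual approximation
    may differ in basis choice and fitting procedure.
    """
    residual = w.astype(float).copy()
    alphas, bases = [], []
    for _ in range(m):
        b = np.where(residual >= 0, 1.0, -1.0)   # binary basis from residual sign
        alpha = np.abs(residual).mean()          # least-squares scale for a sign basis
        alphas.append(alpha)
        bases.append(b)
        residual -= alpha * b                    # fit next basis to what remains
    return np.array(alphas), np.stack(bases)

rng = np.random.default_rng(0)
w = rng.normal(size=64)
alphas, bases = binary_approx(w, m=3)
w_hat = (alphas[:, None] * bases).sum(axis=0)    # reconstructed weights
```

With such a decomposition, a dot product against `w_hat` needs only additions/subtractions per binary basis and one multiplication per scaling factor, which is the source of the multiplication savings the abstract refers to.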