Conference paper
Large-scale neural networks implemented with non-volatile memory as the synaptic weight element: Comparative performance analysis (accuracy, speed, and power)
Abstract
We review our work toward achieving competitive performance (classification accuracies) for on-chip machine learning (ML) of large-scale artificial neural networks (ANNs) using non-volatile memory (NVM)-based synapses, despite the inherent random and deterministic imperfections of such devices. We then show that such systems could potentially offer faster (up to 25×) and lower-power (120–2850×) ML training than GPU-based hardware.
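The abstract's central claim is that training can remain accurate even when each synaptic weight is realized by imperfect NVM devices, whose conductance changes are bounded, unidirectional per device, and randomly variable. The toy sketch below (not the authors' actual model; device parameters `g_max`, `dg`, and `noise` are illustrative assumptions) trains a linear classifier where each weight is the difference of two conductances, G+ − G−, updated only by noisy, clipped potentiation steps:

```python
import numpy as np

rng = np.random.default_rng(0)

class NVMWeights:
    """Hypothetical synapse model: each weight is G+ - G-, where each
    conductance only potentiates in small, noisy, bounded steps."""

    def __init__(self, shape, g_max=1.0, dg=0.01, noise=0.3):
        # Start both conductance arrays at mid-range so w begins near zero.
        self.gp = np.full(shape, g_max / 2)
        self.gm = np.full(shape, g_max / 2)
        self.g_max, self.dg, self.noise = g_max, dg, noise

    @property
    def w(self):
        return self.gp - self.gm  # effective synaptic weight

    def update(self, grad_sign):
        # Potentiate G+ where the gradient is negative (weight should grow),
        # G- where it is positive; each step has random device-to-device
        # variation and conductances saturate at g_max (clipping).
        step = self.dg * (1 + self.noise * rng.standard_normal(self.gp.shape))
        self.gp = np.clip(self.gp + step * (grad_sign < 0), 0, self.g_max)
        self.gm = np.clip(self.gm + step * (grad_sign > 0), 0, self.g_max)

# Toy linearly separable task: label is the sign of x1 + x2.
X = rng.standard_normal((200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

syn = NVMWeights((2,))
for _ in range(300):
    p = 1 / (1 + np.exp(-(X @ syn.w)))          # logistic prediction
    grad = X.T @ (p - y) / len(y)               # gradient of log-loss
    syn.update(np.sign(grad))                   # sign-only, noisy update

acc = np.mean(((X @ syn.w) > 0) == (y > 0.5))
print(f"accuracy with imperfect NVM updates: {acc:.2f}")
```

Despite step-size randomness and conductance saturation, the sign-driven updates still steer the weight vector toward the separating direction, which is the qualitative point of the paper's accuracy results.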