This technology is a static random-access memory (SRAM) featuring capacitive-coupling-based in-memory-computing (IMC) circuits that reduce energy consumption while delivering fast, accurate computation for machine learning workloads.
Current hardware for deep-neural-network computation is constrained by energy consumption, limited parallelism, storage capacity, and accuracy. Static random-access memory (SRAM) is a key bottleneck because its row-by-row access limits parallelism. There remains a need for a circuit design that provides energy efficiency, parallelism, and simultaneous multi-row computation without sacrificing speed, accuracy, or memory capacity.
This technology uses an SRAM design based on capacitive-coupling computing that supports array-level fully parallel computation, multi-bit outputs, and configurable multi-bit inputs. Fully parallel computation is enabled by an 8T1C bitcell that performs bitwise XNOR via capacitive coupling, and the configurable multi-bit inputs improve accuracy. The design demonstrates low energy consumption and can flexibly map representative convolutional and deep neural networks with high accuracy.
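The bitwise XNOR operation and bit-serial handling of multi-bit inputs can be illustrated with a minimal behavioral sketch in Python/NumPy. This is not the chip's circuit model; it assumes an XNOR-based multiply-accumulate formulation with binarized {-1, +1} weights and unsigned multi-bit activations, and all function and variable names are illustrative rather than taken from the source.

    import numpy as np

    def xnor_dot(input_bits, weight_bits):
        # XNOR of {0,1} vectors: 1 where the bits agree, 0 where they differ.
        agree = np.logical_not(np.logical_xor(input_bits, weight_bits))
        # In bipolar (+1/-1) form, each agreement adds +1 and each disagreement adds -1.
        return int(2 * agree.sum() - agree.size)

    def multibit_mac(x, w_sign, n_bits=4):
        # Accumulate bit-serial XNOR results for unsigned n_bit inputs x against
        # binarized {-1, +1} weights w_sign, weighting bit-plane b by 2**b.
        w_bits = (w_sign > 0).astype(np.uint8)   # map +1 -> 1, -1 -> 0
        total = 0
        for b in range(n_bits):
            x_b = (x >> b) & 1                   # b-th bit-plane of the inputs
            total += (1 << b) * xnor_dot(x_b, w_bits)
        return total

    # Consistency check against ordinary arithmetic: the bit-serial XNOR sum equals
    # sum_i w_i * (2*x_i - (2**n_bits - 1)), a dot product with offset-coded inputs.
    rng = np.random.default_rng(0)
    x = rng.integers(0, 16, size=8)              # 4-bit unsigned activations
    w = rng.choice([-1, 1], size=8)              # binarized weights
    assert multibit_mac(x, w, n_bits=4) == int(np.sum(w * (2 * x - 15)))

The bit-serial loop mirrors how a configurable multi-bit input can be applied one bit-plane at a time while the array evaluates all rows in parallel for each plane; the actual accumulation in the technology is performed in the analog domain by the capacitively coupled bitcells.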
This technology has been validated through test-chip measurements, mapping neural networks (a multi-layer perceptron and a convolutional neural network) using the MNIST and CIFAR-10 datasets.
IR CU19057
Licensing Contact: Greg Maskel