Columbia Technology Ventures

Energy-efficient circuit for fast and accurate computation of deep neural network algorithms

This technology is a static random-access memory (SRAM) with capacitive-coupling-based in-memory computing (IMC) circuits that reduce energy consumption and provide fast, accurate computation of machine learning algorithms.

Unmet Need: Low-energy hardware for parallel computing

Current hardware for deep neural network computation is constrained by energy consumption, limited parallelism, storage capacity, and accuracy. Static random-access memory (SRAM) is a key bottleneck: its row-by-row access restricts parallelism. There remains a need for a circuit design that supports energy-efficient, simultaneous multi-row computation without compromising speed, accuracy, or memory capacity.

The Technology: SRAM circuit for fast, accurate, and fully parallel computing

This technology uses an SRAM design based on capacitive-coupling computing that supports array-level fully parallel computation, multi-bit outputs, and configurable multi-bit inputs. Fully parallel computation is enabled by an 8T1C bitcell that computes bitwise XNOR through capacitive coupling, while the configurable multi-bit inputs improve accuracy. The design demonstrates low energy consumption and can flexibly map representative convolutional and deep neural networks with high accuracy.
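To illustrate the kind of computation an array of XNOR bitcells performs, below is a minimal behavioral sketch in Python, not the chip's circuit implementation. It assumes binary (+1/-1) weights stored in the bitcells and unsigned multi-bit inputs applied one bit-plane at a time; the function names, bit widths, and encoding are illustrative assumptions rather than details taken from the design.

```python
import numpy as np

def xnor_popcount(weight_bits: np.ndarray, input_bits: np.ndarray) -> int:
    """Behavioral model of one column of XNOR bitcells: each cell outputs 1
    when its stored weight bit matches the applied input bit, and the column
    accumulates the number of matches (in hardware, as coupled charge).
    With +1 encoded as 1 and -1 as 0, the signed dot product is 2*matches - N."""
    n = weight_bits.size
    matches = int(np.count_nonzero(weight_bits == input_bits))  # XNOR, then count
    return 2 * matches - n

def bit_serial_dot(weights_pm1: np.ndarray, inputs_uint: np.ndarray, n_bits: int = 4) -> int:
    """Dot product of +/-1 weights with unsigned n_bits-wide inputs, computed
    bit-serially: each input bit-plane is applied to the array as a binary
    vector, and the per-plane results are combined with binary weights 2^b."""
    weight_bits = (weights_pm1 > 0).astype(np.uint8)
    w_sum = int(weights_pm1.sum())      # offset that undoes the 0/1 -> +/-1 input mapping
    total = 0
    for b in range(n_bits):
        plane = ((inputs_uint >> b) & 1).astype(np.uint8)   # b-th input bit-plane
        signed = xnor_popcount(weight_bits, plane)          # sum_i w_i * (2*x_bi - 1)
        total += (1 << b) * ((signed + w_sum) // 2)         # sum_i w_i * x_bi, scaled by 2^b
    return total

# Example: weights [+1, -1, +1], inputs [3, 1, 2] -> 3 - 1 + 2 = 4
w = np.array([1, -1, 1])
x = np.array([3, 1, 2], dtype=np.uint8)
assert bit_serial_dot(w, x, n_bits=2) == 4
```

In the actual circuit, every column evaluates its XNOR-and-accumulate result simultaneously through capacitive coupling, which is what gives the array-level parallelism described above; the loop over bit-planes in this sketch only models how configurable multi-bit inputs extend the binary operation.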

This technology has been validated through test-chip measurements, in which neural networks (a multi-layer perceptron and a convolutional neural network) were mapped onto the array and evaluated on the MNIST and CIFAR-10 datasets.

Applications:

  • Computers
  • Smartphones
  • Smart watches
  • Smart speakers
  • Self-driving cars

Advantages:

  • Energy-efficient circuit design
  • Highly accurate computation of machine learning algorithms
  • High computational speed

Lead Inventor:

Mingoo Seok, Ph.D.
