A detailed look at HP’s efforts in applying memristor technology to DARPA’s cognitive computing initiatives, via Neuromorphic Engineer. (How’s that for a publication title!) Be warned: the link is to a PDF. (The article was found via Neurdron, a neat website that’s also worth exploring in greater depth.)
Classical implementations of large-scale neural systems in computers use resources such as central processing unit (CPU) and graphics processing unit (GPU) cores, mass memory storage, and parallelization algorithms. Designs for such systems must cope with power dissipation from data transmission between processing and memory units. By some estimates, this loss is millions of times the power required to actually compute, in the sense of creating meaningful new register contents. Such a high transmission loss is unavoidable as long as memory and computation are physically distant. The creation of an electronic brain stuffed into the volume of a mammalian brain is thus impossible via conventional technology.
The Defense Advanced Research Projects Agency (DARPA)-sponsored Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project is looking for hardware solutions that reduce the power consumed by electronic synapses while achieving a memory density of 10^15 bits per square centimeter. One approach is based on memristive devices. The memristor, initially theorized by University of California, Berkeley Professor Leon Chua and later discovered by HP Labs, has the unique property of remembering its stimulation history in its resistive state. It does not require power to maintain its memory, making it ideal for implementing dense, low-power synapses supporting large-scale neural models. The challenge is to build a software platform able to exploit the memristor’s capacities.
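The two properties mentioned above, resistance that tracks stimulation history and retention without power, can be illustrated with a toy simulation. The sketch below uses the linear ion-drift memristor model associated with HP Labs' work; all parameter values here are my own illustrative assumptions, not measured device data.

```python
# Toy sketch of a linear ion-drift memristor model (in the spirit of the
# HP Labs device): resistance is a weighted mix of a low and a high bound,
# and the mixing fraction x drifts with the charge that has flowed through.
# All constants are illustrative assumptions, not real device parameters.

R_ON, R_OFF = 100.0, 16e3   # resistance bounds in ohms (assumed)
MU_V = 1e-14                # ion mobility, m^2 s^-1 V^-1 (assumed)
D = 10e-9                   # film thickness in meters (assumed)
DT = 1e-3                   # integration time step in seconds

def step(x, v):
    """Advance the doped-region fraction x under applied voltage v for one DT.
    Returns the clipped new x and the memristance seen during the step."""
    m = R_ON * x + R_OFF * (1.0 - x)        # instantaneous memristance
    i = v / m                               # current through the device
    x += MU_V * R_ON / D**2 * i * DT        # linear drift of the doped boundary
    return min(max(x, 0.0), 1.0), m

x = 0.1
for _ in range(2000):                       # sustained positive bias...
    x, m = step(x, 1.0)                     # ...drives the resistance down
low_m = m

# Remove the stimulus: with v = 0 no current flows, so x (and hence the
# stored resistance) is retained without any power.
x_held, _ = step(x, 0.0)
```

The key point the sketch makes is in the last two lines: the state variable only moves when current flows, so holding the memory costs nothing, which is exactly why the article pitches memristors as dense, low-power synapses.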
This platform, named Cog ex Machina (Cog), is being developed at Hewlett-Packard by Greg Snider. Cog abstracts away the underlying hardware and allocates processing resources to computational algorithms based on CPU/GPU availability. Cog exposes a programming interface that enforces synchronous parallel processing of neural data encoded as multidimensional arrays (tensors).
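To make the "synchronous parallel processing of tensors" idea concrete: the programming model amounts to computing every element of a new state array from the previous step's state, so no unit ever sees a half-updated neighbor. The pure-Python sketch below is my own illustration of that model, not Cog's actual API (all names here are mine):

```python
# Hypothetical illustration of a Cog-style synchronous update (not Cog's
# real API): neural state lives in a multidimensional array, and each tick
# computes the whole new array from the previous tick's values.

def step(field, inputs, leak=0.1):
    """One synchronous update of a 2-D field of leaky-integrator units."""
    rows, cols = len(field), len(field[0])
    # Double-buffering keeps the old state untouched while the new one is
    # built -- this is what makes the update synchronous.
    new = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            new[r][c] = (1.0 - leak) * field[r][c] + inputs[r][c]
    return new

field = [[0.0, 0.0], [0.0, 0.0]]
drive = [[1.0, 0.0], [0.0, 1.0]]   # constant input to two of the four units
for _ in range(3):
    field = step(field, drive)
```

Because updates are whole-tensor and side-effect-free, a runtime like Cog is free to split the arrays across CPUs, GPUs, or eventually memristive hardware without changing the program.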
Our Modular Neural Exploring Traveling Agent (MoNETA) project, supported by DARPA/SyNAPSE via a subcontract with HP, uses Cog to progressively implement complex, whole-brain systems able to leverage the power of memristive hardware that is yet to be designed. MoNETA is the brain of an animat, a neuromorphic agent that autonomously learns to perform complex behaviors in a virtual environment. It combines visual scene analysis, spatial navigation, and plasticity. The system is intended to replicate a rodent learning to swim to a submerged platform in the Morris water maze task, a behavior that involves cooperation among several brain areas. The MoNETA brain will eventually implement many cortical and subcortical areas that will allow an animat or robot to engage with a virtual or real environment.
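For readers unfamiliar with the Morris water maze, the task structure is simple even though the neural machinery is not: an agent is released into a pool, must find a hidden platform, and over repeated trials its escape latency drops as it learns the platform's location. The sketch below reduces the task to a gridworld with a tabular learner; this is my own minimal stand-in for the task, and bears no resemblance to MoNETA's actual neuromorphic architecture.

```python
# Gridworld caricature of the Morris water maze task (my construction, not
# MoNETA's architecture): an agent starts in one corner, the "platform" is
# hidden in the opposite corner, and tabular Q-learning stands in for the
# brain. The quantity of interest is escape latency shrinking over trials.
import random

random.seed(0)
SIZE, PLATFORM = 5, (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
q = {}  # (state, action) -> value

def choose(s, eps):
    vals = [q.get((s, a), 0.0) for a in range(4)]
    if random.random() < eps or len(set(vals)) == 1:
        return random.randrange(4)      # explore, or break ties randomly
    return vals.index(max(vals))

def trial(eps=0.2, alpha=0.5, gamma=0.9):
    s = (0, 0)                          # release point
    for steps in range(1, 201):
        a = choose(s, eps)
        dr, dc = ACTIONS[a]
        s2 = (min(max(s[0] + dr, 0), SIZE - 1),
              min(max(s[1] + dc, 0), SIZE - 1))
        r = 1.0 if s2 == PLATFORM else 0.0
        best = max(q.get((s2, b), 0.0) for b in range(4))
        q[(s, a)] = q.get((s, a), 0.0) + alpha * (r + gamma * best - q.get((s, a), 0.0))
        s = s2
        if s == PLATFORM:
            return steps                # escape latency for this trial
    return 200                          # trial capped: platform not found

latencies = [trial() for _ in range(200)]
```

Early trials are essentially random search, while later trials head almost directly to the platform, which is the behavioral signature the MoNETA animat is meant to reproduce with neural, memristor-backed machinery rather than a lookup table.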