The University of Michigan has developed a computer based on memristors.

AI on a smartphone? U of M researchers open the door to localized AI processing, even on wearable devices.

Depending on who you ask, AI is already everywhere. From making processes more efficient to causing a stir in disease diagnosis, the presence of AI can be felt across all industries and on the smartphone in your pocket.

But is AI really present on your devices if processing is done in the cloud?

Currently, hardware limitations mean that localized AI – that is, AI that processes data at the edge rather than in the cloud – remains largely out of reach.

What the University of Michigan announced this week is not simply a computer that implements memristors; it is a device that heralds advances in localized AI.

Meet “the first programmable memristor computer”.

A Programmable Memristor Computer

“Everyone wants to put an artificial intelligence processor in smartphones, but you don’t want your cell phone battery to drain very quickly,” says Wei Lu, professor of electrical and computer engineering at the University of Michigan (UM). Lu is the senior author of “A Fully Integrated Reprogrammable Memristor–CMOS System for Efficient Multiply–Accumulate Operations,” published in Nature Electronics.

Lu is referring to the enormous amount of battery power an average handheld device would need to sustain that level of AI data processing on its own.

As it stands now, AI functions like voice-command interpretation require communication with remote, cloud-based AI engines. This takes time, and it is currently unavoidable because doing that much AI processing on the phone itself would drain the battery very quickly.

The memristor array chip in question, shown here, is connected to the custom computer chip. Photo by Robert Coelius, Michigan Engineering Communications and Marketing.

According to Lu and his team, computer systems based on memristors may be the answer.

What is a memristor?

A memristor can be described as a resistor whose resistance is determined by the voltages it has previously been subjected to and the charge that has flowed through it. If no further voltage is applied, the resistance does not change; the memristor holds its value the way nonvolatile memory does.
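To make that “resistance with memory” behavior concrete, here is a minimal, idealized sketch in Python. It follows the spirit of the simple dopant-drift memristor models found in textbooks, not the physics of the UM device, and every parameter value is purely illustrative.

```python
# A minimal, idealized memristor sketch (simple dopant-drift-style model).
# The resistance range, drift constant, and pulse widths below are
# illustrative values only, not parameters of the U-M device.

class IdealMemristor:
    def __init__(self, r_on=100.0, r_off=16_000.0, state=0.5):
        self.r_on = r_on    # fully "on" resistance, in ohms
        self.r_off = r_off  # fully "off" resistance, in ohms
        self.x = state      # internal state in [0, 1], set by past charge flow

    @property
    def resistance(self):
        # Resistance interpolates between R_on and R_off according to the state.
        return self.r_on * self.x + self.r_off * (1.0 - self.x)

    def apply_voltage(self, v, dt, k=2.5e3):
        # Ohm's law gives the current through the present resistance.
        i = v / self.resistance
        # The charge that flows (i * dt) nudges the internal state, so the
        # new resistance depends on the device's voltage history.
        self.x = min(1.0, max(0.0, self.x + k * i * dt))
        return i


m = IdealMemristor()
print(round(m.resistance))        # initial resistance
for _ in range(1000):             # a train of positive pulses...
    m.apply_voltage(1.0, dt=1e-3)
print(round(m.resistance))        # ...lowers the resistance
m.apply_voltage(0.0, dt=1.0)      # with no applied voltage, nothing changes
print(round(m.resistance))        # the device "remembers" its value
```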

This is analogous to one of the most basic memory elements in digital logic, the flip-flop. Once a flip-flop’s output is set to “1” or “0”, it holds that value until it is deliberately changed. The resistance value stored in a memristor therefore serves the same purpose as the “1” or “0” stored by a flip-flop.

To learn more about memristor and AI basics, see our previous article on a prototype memristor network inspired by mammalian brains, which builds on some of Professor Lu’s earlier work.

The key to more efficient AI

Machine learning and artificial intelligence algorithms must deal with huge amounts of data to do things like identify objects in photos and videos. The current state of the art relies on separate GPUs (graphics processing units) for that task.

The key to the GPU’s efficiency is its large number of tiny cores that can perform many of the necessary calculations at once. A CPU, on the other hand, typically has two to eight large cores, so the calculations must wait in line for processing.

While GPUs do the job much faster than CPUs, Lu believes that “memristor AI processors could be 10 to 100 times better” than today’s GPUs. The experimental-scale computer developed by Lu and his team has more than 5,800 memristors, which act, in essence, as cores. A commercial design would be expected to have millions of them.

The result is a calculation that takes place in memory.

Most of the mathematical processing takes place in the memristor array itself: the inputs applied to each memristor core interact with the resistance value already stored there, and the resulting currents carry the answer.

Thus, the memristor stores information and processes it in the same place. This removes the biggest bottleneck in computing speed and power: shuttling data between memory and processor.
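To make the in-memory arithmetic concrete: each memristor’s stored conductance multiplies the input voltage applied to it (Ohm’s law), and the shared column wire sums the resulting currents (Kirchhoff’s current law), which together amount to a multiply-accumulate operation. The short NumPy sketch below mirrors that arithmetic in software; the conductance and voltage values are invented for illustration, not measurements from the UM chip.

```python
# Hedged sketch: the arithmetic a memristor crossbar performs "in memory".
# Conductances stand in for stored weights; all numbers are illustrative.
import numpy as np

# Each column of G is one stored weight vector (conductances, in siemens).
G = np.array([[1.0e-4, 2.0e-4],
              [0.5e-4, 1.5e-4],
              [2.0e-4, 0.5e-4]])

# Inputs arrive as voltages on the rows.
V = np.array([0.2, 0.5, 0.1])

# Ohm's law gives each cell's current (V_i * G_ij); Kirchhoff's current law
# sums those currents along each column wire. In hardware both steps happen
# at once, as analog physics rather than sequential arithmetic.
I = V @ G   # column currents = one multiply-accumulate result per column

print(I)    # -> [6.5e-05 1.2e-04] amps
```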

Building the Programmable Memristor Computer

It was necessary to integrate the memristor array with a conventional processor and other on-chip elements, such as analog-to-digital and digital-to-analog converters and communication channels. To do so, Lu’s team worked with Professor Michael Flynn and Associate Professor Zhengya Zhang, both from UM’s department of electrical and computer engineering. Fabrication was done at UM’s Lurie Nanofabrication Facility.

Wei Lu (right) and Seung Hwan Lee (left), an electrical engineering doctoral student and the first author of the paper. Photo by Robert Coelius, Michigan Engineering Communications and Marketing.

Testing the concept

The experimental-scale computer was tested with three classic machine learning algorithms, each of which ran successfully on the prototype chip.

  • Perceptron, which determines whether or not an input belongs to a given class. The device identified imperfect Greek letters with 100% accuracy. (A minimal software sketch of the perceptron algorithm follows this list.)
  • Sparse coding, which compresses and classifies data such as images. The experimental computer found the most efficient way to reconstruct images in a set and identified patterns with 100% accuracy.
  • Two-layer neural network, which is designed to find patterns in complex data. In this test, the goal was to determine whether breast cancer screening data represented malignant or benign tumors. The success rate here was 94.6%.
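For readers who want to see the first of those algorithms in its plainest form, here is a minimal software perceptron. It is the textbook algorithm written in NumPy, not a model of how the memristor hardware runs it, and the toy data, learning rate, and epoch count are all illustrative.

```python
# Minimal software perceptron: the classic algorithm behind the first test.
# This is an ordinary NumPy sketch of the algorithm itself, not a model of
# the memristor hardware; the data and hyperparameters are illustrative.
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Learn weights w and bias b so that sign(X @ w + b) matches labels y (+/-1)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified -> nudge the boundary
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Toy two-class data (think of them as pixel features of two letter templates).
X = np.array([[1.0, 1.0], [1.2, 0.9], [-1.0, -1.1], [-0.9, -1.3]])
y = np.array([1, 1, -1, -1])

w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))   # -> [ 1.  1. -1. -1.], all four points classified
```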

Whether you think true artificial intelligence is already here or that the idea is a bit of a stretch, processing demands are growing every day. This recent U of M research offers a look into a future where localized AI is possible, even for wearable devices.

Do you work with memristors? Tell us about your experiences and expertise in the comments below.