
Echo state graph neural networks with analogue random resistive memory arrays

Marrying resistive memory with graph learning
Hardware–software co-design of random resistive memory-based ESGNN for graph learning. a, A cross-sectional transmission electron micrograph of a single resistive memory cell that works as a random resistor after dielectric breakdown. Scale bar 20 nm. b, A cross-sectional transmission electron micrograph of the resistive memory crossbar array fabricated using the backend-of-line process on a 40 nm technology node tape-out. Scale bar 500 nm. c, A schematic illustration of the partition of the random resistive memory crossbar array, where cells shadowed in blue are the weights of the recursive matrix (passing messages along edges) while those in red are the weights of the input matrix (transforming node input features). d, The corresponding conductance map of the two random resistor arrays in c. e, The conductance distribution of the random resistive memory arrays. f, The node embedding procedure of the proposed ESGNN. The internal state of each node at the next time step is co-determined by the sum of neighboring contributions (blue arrows indicate multiplications between node internal state vectors and the recursive matrix in d), the input feature of the node after a random projection (red arrows indicate multiplications between input node feature vectors with the input matrix in d) and the node internal state in the previous time step. g, The graph embedding based on node embeddings. The graph embedding vector g is the sum pooling of all the node internal state vectors in the last time step. Credit: Nature Machine Intelligence (2023). DOI: 10.1038/s42256-023-00609-5
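The node and graph embedding procedure in panels f and g can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the dimensions, the tanh nonlinearity, and the spectral-radius scaling of the recursive matrix are assumptions (the latter is a common echo-state heuristic), and the two random matrices stand in for the resistive-memory arrays shown in panel d.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: d-dimensional node states, f-dimensional features.
d, f = 16, 8

# Fixed random matrices standing in for the resistive-memory arrays:
# W_rec passes messages along edges; W_in projects node input features.
W_rec = rng.normal(size=(d, d))
W_rec *= 0.9 / max(abs(np.linalg.eigvals(W_rec)))  # echo-state scaling (assumed)
W_in = rng.normal(size=(d, f))

def esgnn_embed(adj, X, steps=5):
    """Sketch of the embedding loop described in the caption.

    adj: (n, n) adjacency matrix; X: (n, f) node input features.
    Returns the graph embedding g (sum pooling of final node states).
    """
    n = adj.shape[0]
    H = np.zeros((n, d))                             # node internal states
    for _ in range(steps):
        neighbor_sum = adj @ H @ W_rec.T             # messages along edges
        input_proj = X @ W_in.T                      # random input projection
        H = np.tanh(neighbor_sum + input_proj + H)   # co-determined next state
    return H.sum(axis=0)                             # sum pooling -> g

# Toy 3-node path graph
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
X = rng.normal(size=(3, f))
g = esgnn_embed(A, X)
print(g.shape)  # (16,)
```

Each time step combines exactly the three contributions named in the caption: the recursive-matrix message sum (blue arrows), the randomly projected input features (red arrows), and the previous internal state.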

Graph neural networks are widely used to study social networks, e-commerce, drug prediction, human-computer interaction, and more.

In a new study published in Nature Machine Intelligence as the cover story, researchers from the Institute of Microelectronics of the Chinese Academy of Sciences (IMECAS) and the University of Hong Kong have accelerated graph learning with random resistive memory (RRM), achieving a 40.37× improvement in energy efficiency over a graphics processing unit on representative graph learning tasks.

Deep learning with graphs on traditional von Neumann computers leads to frequent data shuttling, inevitably incurring long processing times and high energy use. In-memory computing with resistive memory may provide a novel solution.

The researchers presented a novel hardware–software co-design, the RRM-based echo state graph neural network, to address those challenges.

The RRM not only harnesses low-cost, nanoscale and stackable resistors for highly efficient in-memory computing, but also leverages the intrinsic stochasticity of dielectric breakdown to implement random projections in hardware for an echo state network that effectively minimizes the training cost.
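The training-cost saving comes from the echo-state principle: the random projections stay fixed (here, the conductances set by stochastic dielectric breakdown), and only a lightweight linear readout is trained. A minimal sketch of that idea, under assumed dimensions and with ridge regression as the readout solver (a common echo-state choice, not a detail confirmed by the article):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: a fixed random projection (the part the resistive
# memory provides "for free"), with only a linear readout trained.
d_in, d_state, n_classes, n_samples = 8, 32, 3, 200

W_rand = rng.normal(size=(d_state, d_in))     # fixed; never trained

X = rng.normal(size=(n_samples, d_in))        # toy input data
y = rng.integers(0, n_classes, n_samples)     # toy labels

H = np.tanh(X @ W_rand.T)                     # random feature embeddings
Y = np.eye(n_classes)[y]                      # one-hot targets

# Train only the readout, in closed form — no backpropagation through
# the random projection is ever needed.
lam = 1e-2
W_out = np.linalg.solve(H.T @ H + lam * np.eye(d_state), H.T @ Y)

pred = (H @ W_out).argmax(axis=1)
print((pred == y).mean())                     # training accuracy
```

Because the random weights are never updated, device-to-device variation in the resistors is not a defect to be calibrated away but the very source of the projection, which is what makes the stochasticity of dielectric breakdown usable as a feature.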

The work is significant for developing next-generation AI hardware systems.

More information:
Shaocong Wang et al, Echo state graph neural networks with analogue random resistive memory arrays, Nature Machine Intelligence (2023). DOI: 10.1038/s42256-023-00609-5

Provided by
Chinese Academy of Sciences


Citation:
Echo state graph neural networks with analogue random resistive memory arrays (2023, March 1)
retrieved 1 March 2023
from https://techxplore.com/news/2023-03-echo-state-graph-neural-networks.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.
