Wednesday, November 30 2022


When two technologies converge, they can create something new and wonderful – like cell phones and browsers merged to form smartphones.

Today, developers are applying AI’s ability to find patterns to massive graph databases, which store information about relationships among data points of all kinds. Together they produce a powerful new tool called graph neural networks.

What are graph neural networks?

Graph neural networks apply the predictive power of deep learning to rich data structures that describe objects and their relationships as points connected by lines in a graph.

In GNNs, data points are called nodes, which are connected by lines – called edges – with elements expressed mathematically so that machine learning algorithms can make useful predictions at the level of nodes, edges or entire graphs.
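
To make that concrete, here is a minimal sketch, in plain Python with NumPy, of how a small graph might be represented numerically for a GNN. The node features, edges and labels are made-up illustrative values, not the required format of any particular library.

```python
# A minimal sketch of representing a graph numerically for a GNN.
# All values are illustrative, not tied to any specific framework.
import numpy as np

# Four nodes, each with a 3-dimensional feature vector (e.g., user or product attributes).
node_features = np.array([
    [0.2, 1.0, 0.0],
    [0.9, 0.1, 0.3],
    [0.4, 0.4, 0.8],
    [0.7, 0.2, 0.5],
])

# Edges as (source, target) pairs, with an optional feature per edge (e.g., transaction amount).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
edge_features = np.array([[10.0], [2.5], [7.0], [1.2]])

# Depending on the task, labels can live at different levels of the graph:
node_labels = np.array([0, 1, 0, 1])   # node-level prediction (e.g., malicious account or not)
edge_labels = np.array([0, 0, 1, 0])   # edge-level prediction (e.g., suspicious transaction)
graph_label = 1                        # graph-level prediction (e.g., a property of a whole molecule)
```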

What can GNNs do?

A growing number of companies are applying GNNs to improve drug discovery, fraud detection, and recommendation systems. These and many other applications rely on finding patterns in the relationships between data points.

Researchers are exploring use cases for GNNs in computer graphics, cybersecurity, genomics, and materials science. A recent paper reported how GNNs used transit maps as graphs to improve time-of-arrival predictions.

Many branches of science and industry already store valuable data in graph databases. Using deep learning, they can train predictive models that surface new information from their graphs.

Knowledge from many areas of science and industry can be expressed in graphs.

“GNNs are one of the hottest areas of deep learning research, and we see an increasing number of applications taking advantage of GNNs to improve their performance,” said George Karypis, a senior principal scientist at AWS, in a talk earlier this year.

Others agree. GNNs are “catching fire because of their flexibility to model complex relationships, something traditional neural networks cannot do,” said Jure Leskovec, an associate professor at Stanford, speaking in a recent talk where he showed the chart below of AI papers that mention them.

Recent papers on graph neural networks

Who uses graph neural networks?

Amazon reported in 2017 on its work using GNNs to detect fraud. In 2020, it rolled out a public GNN service that others could use for fraud detection, recommendation systems, and other applications.

To maintain high levels of customer trust, Amazon Search uses GNNs to detect malicious sellers, buyers, and products. Using NVIDIA GPUs, it is able to explore graphs with tens of millions of nodes and hundreds of millions of edges while cutting training time from 24 hours to five.

For its part, biopharmaceutical company GSK maintains a knowledge graph with nearly 500 billion nodes that is used in many of its machine learning models, said Kim Branson, the company’s global head of AI, speaking on a panel at a GNN workshop.

LinkedIn uses GNNs to make social recommendations and understand the relationships between people’s skills and their job titles, said Jaewon Yang, a senior software engineer at the company, speaking on another panel at the workshop.

“GNNs are general-purpose tools, and every year we discover a bunch of new applications for them,” said Joe Eaton, a distinguished engineer at NVIDIA who leads a team applying accelerated computing to GNNs. “We haven’t even scratched the surface of what GNNs can do.”

In another sign of interest in GNNs, videos of a course on them that Leskovec teaches at Stanford have received more than 700,000 views.

How do GNNs work?

Until now, deep learning has focused mainly on images and text, types of structured data that can be described as sequences of words or grids of pixels. Graphs, by contrast, are unstructured. They can take any shape or size and contain any type of data, including images and text.

Using a process called message passing, GNNs organize graphs so that machine learning algorithms can use them.

Message passing embeds into each node information about its neighbors. AI models use the embedded information to find patterns and make predictions.

Message passing in GNNs
Examples of data flow in three types of GNN.
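
As a rough illustration of what one round of message passing looks like, here is a minimal sketch in NumPy. The tiny graph, feature values, and random weight matrices are stand-ins chosen for illustration; in a real GNN the weights would be learned during training.

```python
# A minimal sketch of one round of message passing with mean aggregation.
# Graph, features, and weights are illustrative stand-ins, not a specific library's API.
import numpy as np

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]      # directed (source, target) pairs
x = rng.normal(size=(4, 3))                   # 4 nodes, 3 features each
W_self = rng.normal(size=(3, 4))              # stand-in for learned parameters
W_neigh = rng.normal(size=(3, 4))

# Step 1: each node gathers messages (here, raw features) from its in-neighbors and averages them.
agg = np.zeros_like(x)
counts = np.zeros((4, 1))
for src, dst in edges:
    agg[dst] += x[src]
    counts[dst] += 1
agg /= np.maximum(counts, 1)

# Step 2: combine each node's own features with the aggregated messages, then apply a nonlinearity.
h = np.maximum(x @ W_self + agg @ W_neigh, 0.0)   # ReLU; shape (4, 4) node embeddings
```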

For example, recommender systems use node embeddings in GNNs to match customers with products. Fraud detection systems use edge embeddings to find suspicious transactions, and drug discovery models compare entire graphs of molecules to find out how they react with one another.
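
The sketch below illustrates, with toy numbers, how node-, edge- and graph-level predictions can be read out of node embeddings produced by a GNN. The embeddings and scoring functions are illustrative stand-ins, not the actual heads used by any production recommender or fraud detection system.

```python
# A minimal sketch of node-, edge-, and graph-level readouts from GNN node embeddings.
import numpy as np

rng = np.random.default_rng(1)
h = rng.normal(size=(4, 8))                  # node embeddings from a GNN (4 nodes, 8 dims)
edges = [(0, 1), (1, 2), (2, 3)]

# Node level: match a "customer" node to "product" nodes by embedding similarity.
customer, products = h[0], h[1:]
recommendation_scores = products @ customer              # higher score = better match

# Edge level: score each edge (e.g., a transaction) from its endpoint embeddings.
edge_scores = np.array([h[s] @ h[d] for s, d in edges])  # e.g., suspiciousness score

# Graph level: pool all node embeddings into one vector describing the whole graph
# (e.g., a molecule), then score it with a linear readout.
graph_embedding = h.mean(axis=0)
w_readout = rng.normal(size=8)
graph_score = graph_embedding @ w_readout
```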

GNNs are unique in two other ways: they use sparse computations, and the models typically have only two or three layers. Other AI models typically use dense math and have hundreds of neural network layers.
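
A minimal sketch of that sparsity, assuming SciPy is available: neighbor aggregation becomes a multiplication by a sparse adjacency matrix, and the whole model here is just two layers. A real model would normalize the adjacency matrix and learn the weights rather than use random values.

```python
# A minimal sketch of sparse GNN computation: propagation is a sparse matrix multiply,
# and the model has only two layers. All values are illustrative.
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(2)
rows, cols = [0, 1, 2, 3], [1, 2, 3, 0]            # one nonzero per row: a very sparse graph
A = csr_matrix((np.ones(4), (rows, cols)), shape=(4, 4))

x = rng.normal(size=(4, 3))                        # node features
W1, W2 = rng.normal(size=(3, 8)), rng.normal(size=(8, 2))

h1 = np.maximum(A @ x @ W1, 0.0)   # layer 1: sparse neighbor aggregation + dense transform + ReLU
out = A @ h1 @ W2                  # layer 2: two layers are often enough for a GNN
```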

Example pipeline for a graph neural network
A GNN pipeline has an input graph and output predictions.

What is the history of GNNs?

A 2009 paper from Italian researchers was the first to give graph neural networks their name. But it took eight more years before two researchers in Amsterdam demonstrated their power with a variant they called a graph convolutional network (GCN), which is one of the most popular GNNs today.

The GCN work inspired Leskovec and two of his Stanford graduate students to create GraphSage, a GNN that showed new ways the message-passing function could work. He put it to the test in the summer of 2017 at Pinterest, where he served as chief scientist.

The GraphSage Graph Neural Network
GraphSage pioneered powerful aggregation techniques for message passing in GNNs.

Their implementation, PinSage, was a recommender system that spanned 3 billion nodes and 18 billion edges and outperformed other AI models at the time.

Pinterest now applies it to more than 100 use cases across the company. “Without GNNs, Pinterest wouldn’t be as engaging as it is today,” said Andrew Zhai, a lead machine learning engineer at the company, speaking on an online panel.

Meanwhile, other variants and hybrids have emerged, including graph recurrent networks and graph attention networks (GATs). GATs borrow the attention mechanism defined in transformer models to help GNNs focus on the portions of datasets that are of greatest interest.
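
Here is a minimal sketch of attention over a node’s neighbors in the spirit of a GAT, using a simple dot-product score in place of the original learned scoring function; the embeddings are random illustrative values.

```python
# A minimal sketch of attention-weighted neighbor aggregation, GAT-style.
# The dot-product score is a simplification of the learned attention in the original GAT.
import numpy as np

rng = np.random.default_rng(3)
h = rng.normal(size=(4, 8))            # node embeddings
neighbors_of_0 = [1, 2, 3]             # node 0 attends over these neighbors

# Score each neighbor against node 0, then normalize the scores with a softmax.
scores = np.array([h[0] @ h[j] for j in neighbors_of_0])
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# Node 0's update is the attention-weighted sum of its neighbors' embeddings.
h0_new = sum(w * h[j] for w, j in zip(weights, neighbors_of_0))
```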

Variations of Graph Neural Networks
An overview of GNNs mapped out a family tree of their variants.

Scaling graph neural networks

Going forward, GNNs need to scale in all dimensions.

Organizations that do not yet manage graph databases need tools to facilitate the creation of these complex data structures.

Those who use graph databases know that in some cases they grow to have thousands of features embedded on a single node or edge. That presents challenges in efficiently loading the huge datasets from storage subsystems, across networks, to processors.

“We offer products that maximize the memory, compute bandwidth and throughput of accelerated systems to address these data loading and scaling issues,” Eaton said.

As part of this work, NVIDIA announced at GTC that it now supports PyTorch Geometric (PyG) in addition to the Deep Graph Library (DGL), two of the most popular GNN software frameworks.

NVIDIA tools for creating graphical neural networks
NVIDIA provides several tools to speed up GNN creation.

DGL and PyG containers powered by NVIDIA are performance optimized and tested for NVIDIA GPUs. They provide an easy place to start developing applications using GNNs.
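
For example, here is a minimal sketch of a two-layer GCN written with PyG, assuming PyTorch and torch_geometric are installed. The toy graph and random features stand in for a real dataset, and this is a sketch rather than the optimized code shipped in the NVIDIA containers.

```python
# A minimal two-layer GCN sketch in PyTorch Geometric (PyG).
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# A toy graph: 4 nodes with 3 features each; edges given as a [2, num_edges] index tensor.
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]], dtype=torch.long)
data = Data(x=torch.randn(4, 3), edge_index=edge_index)

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(3, 16)   # first graph convolution: 3 input features -> 16 hidden
        self.conv2 = GCNConv(16, 2)   # second graph convolution: 16 hidden -> 2 output classes

    def forward(self, data):
        h = self.conv1(data.x, data.edge_index).relu()
        return self.conv2(h, data.edge_index)

model = GCN()
out = model(data)                     # per-node logits, shape [4, 2]
```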

To learn more, watch a talk on accelerating and scaling GNNs with DGL and GPUs by Da Zheng, a senior applied scientist at AWS. NVIDIA engineers also held separate talks on GNN acceleration with DGL and PyG.

To get started today, join the early access program for DGL and PyG.
