Based on the WebGPU capabilities provided by g-plugin-gpgpu, we implement common graph analysis algorithms, referring to cuGraph and other CUDA implementations, so that graphs with large numbers of nodes and edges can be processed. This is a significant improvement over the CPU serial versions currently offered by G6.
Algorithm name | Node / Edge | CPU time consumption | GPU time consumption | Speed up |
---|---|---|---|---|
SSSP | 1k Nodes & 5k Edges | 27687.10 ms | 261.60 ms | ~100x |
PageRank | 1k Nodes & 500k Edges | 13641.50 ms | 130.20 ms | ~100x |
Before using it, you need to check two prerequisites: the runtime environment and the data format.
WebGPU is currently (2022-03-21) supported in Chrome 94 (stable) and above, but since we use the latest WGSL syntax, it is recommended to update your browser to the latest version.
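You can feature-detect WebGPU at runtime before creating a graph. The following is a minimal sketch under the assumption of a browser environment; it only checks availability and is not part of `@antv/webgpu-graph`:

```js
// Minimal runtime feature detection for WebGPU (browser environment assumed).
(async () => {
  if (!('gpu' in navigator)) {
    console.warn('WebGPU is not available; please upgrade to a recent version of Chrome.');
    return;
  }
  // Confirm that an adapter can actually be acquired.
  const adapter = await navigator.gpu.requestAdapter();
  console.log('WebGPU adapter available:', adapter !== null);
})();
```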
For production use at this time, an Origin Trial needs to be enabled to access WebGPU features (no longer required from Chrome 100 onwards).
Add a `<meta>` tag containing the Origin Trial token to the page, e.g. via the DOM API:

```js
const tokenElement = document.createElement('meta');
tokenElement.httpEquiv = 'origin-trial';
tokenElement.content = 'AkIL...5fQ==';
document.head.appendChild(tokenElement);
```
We use G6's graph data format, which is also the first, fixed parameter of every algorithm below.
```js
const data = {
  // Node set
  nodes: [
    {
      id: 'node1', // String, required; unique identifier of the node
      x: 100, // Number, optional; x coordinate of the node
      y: 200, // Number, optional; y coordinate of the node
    },
    {
      id: 'node2', // String, required; unique identifier of the node
      x: 300, // Number, optional; x coordinate of the node
      y: 200, // Number, optional; y coordinate of the node
    },
  ],
  // Edge set
  edges: [
    {
      source: 'node1', // String, required; id of the source node
      target: 'node2', // String, required; id of the target node
    },
  ],
};
```
If the data format does not meet the above requirements, the algorithm will not execute properly.
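Since malformed data will prevent the algorithms from running, it can be worth validating the input beforehand. Below is a minimal sketch; the `isValidGraphData` helper is hypothetical and not part of `@antv/webgpu-graph`:

```js
// Hypothetical helper (not part of @antv/webgpu-graph): checks that every node
// has a unique string id and that every edge references existing node ids.
function isValidGraphData(data) {
  if (!data || !Array.isArray(data.nodes) || !Array.isArray(data.edges)) return false;
  const ids = new Set();
  for (const node of data.nodes) {
    if (typeof node.id !== 'string' || ids.has(node.id)) return false;
    ids.add(node.id);
  }
  return data.edges.every((edge) => ids.has(edge.source) && ids.has(edge.target));
}
```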
We offer the following two ways to use it.
The first way is to create a WebGPUGraph, which internally performs initialization work such as canvas creation and plugin registration. Once that completes, the algorithms can be called directly.
```js
import { WebGPUGraph } from '@antv/webgpu-graph';

const graph = new WebGPUGraph();

(async () => {
  const result = await graph.pageRank(data);
})();
```
The second way: if you are already using G's Canvas for rendering, you can reuse it as follows.
```js
import { Canvas } from '@antv/g';
import { Renderer } from '@antv/g-webgl';
import { Plugin } from '@antv/g-plugin-gpgpu';
import { pageRank } from '@antv/webgpu-graph';

const webglRenderer = new Renderer();
webglRenderer.registerPlugin(new Plugin());

const canvas = new Canvas({
  container: 'my-canvas-id',
  width: 1,
  height: 1,
  renderer: webglRenderer,
});

(async () => {
  // Wait for the canvas initialization to complete
  await canvas.ready;

  // Get the Device from the renderer
  const plugin = webglRenderer.getPlugin('device-renderer');
  const device = plugin.getDevice();

  // Call the algorithm, passing in the Device and graph data
  const result = await pageRank(device, data);
})();
```
All the following algorithms are called asynchronously.
The parameter list of pageRank is as follows.
name | type | isRequired | description |
---|---|---|---|
graphData | GraphData | true | The graph data in G6 format. |
epsilon | number | false | Precision threshold used to decide whether the PageRank scores have converged; default value is 1e-05. |
alpha | number | false | The damping factor, i.e. the probability that at any moment a user keeps following links from the current node; default value is 0.85. |
maxIteration | number | false | Maximum number of iterations; default value is 1000. |
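To make the role of these parameters concrete, here is a minimal CPU reference sketch of the PageRank iteration (for illustration only; it is not the GPU implementation used by the library): `alpha` is the damping factor in the update rule, and iteration stops when the total change in scores drops below `epsilon` or after `maxIteration` rounds.

```js
// CPU reference sketch of PageRank (illustration only, not the library's GPU kernel).
// alpha: damping factor; epsilon: convergence threshold; maxIteration: iteration cap.
// Dangling-node mass is ignored for brevity.
function pageRankReference(nodes, edges, alpha = 0.85, epsilon = 1e-5, maxIteration = 1000) {
  const n = nodes.length;
  const index = new Map(nodes.map((node, i) => [node.id, i]));
  const outDegree = new Array(n).fill(0);
  edges.forEach(({ source }) => outDegree[index.get(source)]++);

  let scores = new Array(n).fill(1 / n);
  for (let iter = 0; iter < maxIteration; iter++) {
    const next = new Array(n).fill((1 - alpha) / n);
    edges.forEach(({ source, target }) => {
      const s = index.get(source);
      next[index.get(target)] += (alpha * scores[s]) / outDegree[s];
    });
    // Stop once the total change in scores is below the precision threshold.
    const delta = next.reduce((sum, v, i) => sum + Math.abs(v - scores[i]), 0);
    scores = next;
    if (delta < epsilon) break;
  }
  return nodes
    .map((node, i) => ({ id: node.id, score: scores[i] }))
    .sort((a, b) => b.score - a.score);
}
```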
The return value is an array of results containing `id` and `score` attributes, in the form `[{ id: 'A', score: 0.38 }, { id: 'B', score: 0.32 }, ...]`. The array is sorted by `score` from highest to lowest, so the first element represents the node with the highest importance.
Refer to the following CUDA version implementation.
It is used as follows:

```js
const result = await graph.pageRank(data);
// [{ id: 'B', score: 0.3902697265148163 }, ...]
```
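Because the result is already sorted, picking out the highest-ranked nodes needs no extra work; a brief sketch reusing the graph and data from above:

```js
// The result array is sorted by score in descending order,
// so the top-N most important nodes are simply the first N entries.
const result = await graph.pageRank(data);
const top10 = result.slice(0, 10);
console.log('most important node:', top10[0]);
```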
The improvement is especially significant in scenarios with larger numbers of nodes and edges.
Algorithm name | Node / Edge | CPU time consumption | GPU time consumption | Speed up |
---|---|---|---|---|
PageRank | 1k Nodes & 500k Edges | 13641.50 ms | 130.20 ms | ~100x |
Single-source shortest path (SSSP), i.e. the shortest paths from one node to all other nodes.
The list of parameters is as follows.
name | type | isRequired | description |
---|---|---|---|
graphData | GraphData | true | The graph data in G6 format. |
source | number | false | The source node from which the shortest paths are computed. |
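For reference, GPU implementations of SSSP are typically based on Bellman-Ford-style edge relaxation. Below is a minimal CPU sketch for illustration only (it is not the library's implementation, and it assumes unweighted edges count as weight 1):

```js
// CPU reference sketch of SSSP via Bellman-Ford relaxation (illustration only).
// Returns a Map from node id to the shortest distance from the source node.
function ssspReference(nodes, edges, sourceId) {
  const dist = new Map(nodes.map((node) => [node.id, Infinity]));
  dist.set(sourceId, 0);

  // Relax all edges up to |V| - 1 times; stop early once no distance changes.
  for (let i = 0; i < nodes.length - 1; i++) {
    let changed = false;
    for (const { source, target, weight = 1 } of edges) {
      if (dist.get(source) + weight < dist.get(target)) {
        dist.set(target, dist.get(source) + weight);
        changed = true;
      }
    }
    if (!changed) break;
  }
  return dist;
}
```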
Refer to the following CUDA version implementations.
- Accelerating large graph algorithms on the GPU using CUDA
- https://github.com/divyanshu-talwar/Parallel-DFS
- K-Core Decomposition with CUDA
- https://github.com/adamantmc/CudaCosineSimilarity
- https://github.com/hamham240/cudaGraph/blob/main/src/algos/cudaCD.cu