Provides WebGL 1/2 and WebGPU-based rendering capabilities, and also includes GPU-based picking capabilities. All 2D basic graphics are provided by the built-in G core package, and the ability to extend it with custom 2D/3D graphics is also exposed.
This plugin is already built into the g-webgl and g-webgpu renderers by default, so there is no need to introduce it manually.
```js
import { Renderer as WebGLRenderer } from '@antv/g-webgl';

const renderer = new WebGLRenderer();
```
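The examples below assume a canvas created with this renderer. A minimal sketch using @antv/g's Canvas, assuming a `<div id="container"></div>` exists on the page:

```js
import { Canvas } from '@antv/g';

// A minimal sketch: a canvas backed by the WebGL renderer above.
const canvas = new Canvas({
    container: 'container', // id of an existing <div>
    width: 600,
    height: 500,
    renderer,
});
```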
A Device represents a GPU device (as opposed to the Host, which usually refers to the CPU) and provides a unified hardware adaptation layer (HAL) over the WebGL 1/2 and WebGPU implementations. The design of its API draws heavily on the WebGPU API.
Since device initialization may be asynchronous (e.g. `adapter.requestDevice()` in WebGPU), two ways to obtain a Device are provided.
```js
import { CanvasEvent } from '@antv/g';

// Listen for the canvas ready event
canvas.addEventListener(CanvasEvent.READY, () => {
    // Get the Device from the renderer
    const plugin = renderer.getPlugin('device-renderer');
    const device = plugin.getDevice();
});

// Or wait for the canvas to be ready
await canvas.ready;
// Get the Device from the renderer
const plugin = renderer.getPlugin('device-renderer');
const device = plugin.getDevice();
```
After acquiring a Device, you can use it to create a series of GPU-related resources, such as Buffer, Texture, etc.
A Buffer represents a piece of memory used in GPU operations. Its data can be initialized at creation time, and parts of it can be modified afterwards. The data is stored in a linear layout. When you need to read the data back on the CPU side (Host), use a Readback.
```ts
export interface Buffer {
    setSubData(
        dstByteOffset: number,
        src: ArrayBufferView,
        srcByteOffset?: number,
        byteLength?: number,
    ): void;
    destroy(): void;
}
```
A Buffer is created in the following way; the descriptor specifies its initial data (or size in bytes), its usage, and an optional frequency hint.
```ts
interface Device {
    createBuffer(descriptor: BufferDescriptor): Buffer;
}

export interface BufferDescriptor {
    viewOrSize: ArrayBufferView | number;
    usage: BufferUsage;
    hint?: BufferFrequencyHint;
}

export enum BufferUsage {
    MAP_READ = 0x0001,
    MAP_WRITE = 0x0002,
    COPY_SRC = 0x0004,
    COPY_DST = 0x0008,
    INDEX = 0x0010,
    VERTEX = 0x0020,
    UNIFORM = 0x0040,
    STORAGE = 0x0080,
    INDIRECT = 0x0100,
    QUERY_RESOLVE = 0x0200,
}

export enum BufferFrequencyHint {
    Static = 0x01,
    Dynamic = 0x02,
}
```
For example, when used with g-plugin-gpgpu, allocate the input and output Buffers:
```js
const buffer = device.createBuffer({
    usage: BufferUsage.STORAGE | BufferUsage.COPY_SRC,
    viewOrSize: new Float32Array([1, 2, 3, 4]),
});
```
For example, to modify a Uniform variable located at byte offset 20 in the original Buffer:
```js
paramBuffer.setSubData(
    5 * Float32Array.BYTES_PER_ELEMENT, // 5 floats × 4 bytes = byte offset 20
    new Float32Array([maxDisplace]),
);
```
Frees the Buffer resource.
```js
buffer.destroy();
```
Sometimes we need to read data from a GPU-side (Device) Buffer or Texture on the CPU side (Host). This is done with the Readback object, which provides asynchronous read methods.
```ts
interface Device {
    createReadback(): Readback;
}
```
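For example, create a Readback from the Device; the examples below assume this `readback` instance:

```js
const readback = device.createReadback();
```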
readBuffer reads the contents of a Buffer asynchronously. The list of parameters is as follows; the return value is a Promise resolving to the ArrayBufferView that was read back.
```ts
export interface Readback {
    readBuffer(
        srcBuffer: Buffer,
        srcByteOffset?: number,
        dstBuffer?: ArrayBufferView,
        dstOffset?: number,
        length?: number,
    ): Promise<ArrayBufferView>;
}
```
For example, when used with g-plugin-gpgpu, read back the result of a computation:
```js
const result = await readback.readBuffer(resultBuffer); // Float32Array([...])
```
readTexture reads the contents of a Texture asynchronously. The list of parameters is as follows; the return value is a Promise resolving to the ArrayBufferView that was read back.
```ts
export interface Readback {
    readTexture(
        t: Texture,
        x: number,
        y: number,
        width: number,
        height: number,
        dstBuffer: ArrayBufferView,
        dstOffset?: number,
        length?: number,
    ): Promise<ArrayBufferView>;
}
```
For example, when implementing GPU-based color-coded picking:
```js
const pickedColors = await readback.readTexture(
    this.pickingTexture,
    rect.x,
    rect.y,
    rect.width,
    rect.height,
    new Uint8Array(rect.width * rect.height * 4),
);
```
Releases the Readback resource.
```js
readback.destroy();
```
Textures are a very common GPU resource.
```ts
export interface Texture {
    setImageData(data: TexImageSource | ArrayBufferView[]): void;
}
```
```ts
interface Device {
    createTexture(descriptor: TextureDescriptor): Texture;
}

export interface TextureDescriptor {
    dimension: TextureDimension;
    pixelFormat: Format;
    width: number;
    height: number;
    depth: number;
    numLevels: number;
    usage: TextureUsage;
    pixelStore?: Partial<{
        packAlignment: number;
        unpackAlignment: number;
        unpackFlipY: boolean;
    }>;
}
```
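Before setting any content, a texture must first be created. A minimal sketch, assuming enum members named `TextureDimension.n2D`, `Format.U8_RGBA_NORM`, and `TextureUsage.Sampled` (check the plugin's actual exports):

```js
// A minimal sketch: a 256 × 256 2D RGBA texture with a single mip level.
// The enum member names here are assumptions, not confirmed API.
const texture = device.createTexture({
    dimension: TextureDimension.n2D, // assumed: 2D texture
    pixelFormat: Format.U8_RGBA_NORM, // assumed: 8-bit normalized RGBA
    width: 256,
    height: 256,
    depth: 1,
    numLevels: 1,
    usage: TextureUsage.Sampled, // assumed: sampled in shaders
});
```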
For example, set the texture content after an image has loaded successfully:
```js
const image = new window.Image();
image.onload = () => {
    // Set the texture content from the loaded image
    texture.setImageData(image);
};
image.onerror = () => {};
image.crossOrigin = 'Anonymous';
image.src = src;
```
Frees the Texture resource.
```js
texture.destroy();
```
A Sampler describes how a texture is sampled and is created by specifying a descriptor:

```ts
interface Device {
    createSampler(descriptor: SamplerDescriptor): Sampler;
}

export interface SamplerDescriptor {
    wrapS: WrapMode;
    wrapT: WrapMode;
    wrapQ?: WrapMode;
    minFilter: TexFilterMode;
    magFilter: TexFilterMode;
    mipFilter: MipFilterMode;
    minLOD?: number;
    maxLOD?: number;
    maxAnisotropy?: number;
    compareMode?: CompareMode;
}
```
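For example, a clamping, bilinear sampler might be created as follows. A minimal sketch, assuming enum members named `WrapMode.Clamp`, `TexFilterMode.Bilinear`, and `MipFilterMode.NoMip`:

```js
// A minimal sketch: enum member names are assumptions, not confirmed API.
const sampler = device.createSampler({
    wrapS: WrapMode.Clamp,
    wrapT: WrapMode.Clamp,
    minFilter: TexFilterMode.Bilinear,
    magFilter: TexFilterMode.Bilinear,
    mipFilter: MipFilterMode.NoMip, // no mipmapping
    minLOD: 0,
    maxLOD: 0,
});
```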
Frees the Sampler resource.
```js
sampler.destroy();
```
There are two ways to create a RenderTarget: by specifying a descriptor, or from an existing Texture.
```ts
interface Device {
    createRenderTarget(descriptor: RenderTargetDescriptor): RenderTarget;
    createRenderTargetFromTexture(texture: Texture): RenderTarget;
}

export interface RenderTargetDescriptor {
    pixelFormat: Format;
    width: number;
    height: number;
    sampleCount: number;
    texture?: Texture;
}
```
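Both paths might look like this. A minimal sketch, assuming a render-target pixel format named `Format.U8_RGBA_RT`:

```js
// A minimal sketch: Format.U8_RGBA_RT is an assumed enum member.
const renderTarget = device.createRenderTarget({
    pixelFormat: Format.U8_RGBA_RT,
    width: 256,
    height: 256,
    sampleCount: 1, // no multisampling
});

// Or wrap an existing Texture, e.g. a picking texture
const pickingTarget = device.createRenderTargetFromTexture(texture);
```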
Frees the RenderTarget resource.
```js
renderTarget.destroy();
```
A Program is created by specifying its shader sources, either raw or preprocessed:

```ts
interface Device {
    createProgram(program: ProgramDescriptor): Program;
}

export interface ProgramDescriptor {
    vert?: string;
    frag?: string;
    preprocessedVert?: string;
    preprocessedFrag?: string;
    preprocessedCompute?: string;
}
```
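For example, a Program might be created from a pair of shader strings. A minimal sketch with placeholder GLSL sources:

```js
// A minimal sketch: the GLSL sources below are placeholders.
const program = device.createProgram({
    vert: `
        layout(location = 0) in vec2 a_Position;
        void main() {
            gl_Position = vec4(a_Position, 0.0, 1.0);
        }
    `,
    frag: `
        out vec4 outputColor;
        void main() {
            outputColor = vec4(1.0, 0.0, 0.0, 1.0);
        }
    `,
});
```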
Frees the Program resource.
```js
program.destroy();
```
Unlike g-plugin-canvas-picker and g-plugin-svg-picker, which are CPU-based picking schemes, we use a GPU-based approach called "color coding".
This approach consists of the following steps.