A Self-Supervised, Pre-Trained, and Cross-Stage-Aligned Circuit Encoder Provides a Foundation for Various Design Tasks

Wenji Fang (1), Shang Liu (1), Hongce Zhang (1,2), Zhiyao Xie (1)
wfang838@connect.ust.hk
(1) Hong Kong University of Science and Technology
(2) Hong Kong University of Science and Technology (Guangzhou)

Outline
- Background
- CircuitEncoder Framework
- Experimental Results
- Conclusion & Future Work

Background

Background: AI for EDA
- Remarkable achievements:
  - Design quality evaluation: power, timing, area, routability, etc.
  - Functionality reasoning: arithmetic word-level abstraction, SAT, etc.
  - Optimization: design space exploration, etc.
  - Generation: RTL code, verification, etc.
Background: AI for EDA
- Most existing predictive solutions are task-specific
- Supervised methods are tedious and time-consuming to build
- They are hard to generalize to other tasks

Background: Foundation Models
- AI foundation models follow the pretrain-finetune paradigm:
  - Pre-training on large amounts of unlabeled data (self-supervised)
  - Fine-tuning based on task-specific labels (supervised)
- Applications:
  - Natural language processing: GPT, BERT, Llama, etc.
  - Computer vision: DALL-E, Stable Diffusion, etc.

CircuitEncoder Framework

Motivation: Towards Circuit Foundation Models
- Large circuit model

Motivation: Towards Circuit Foundation Models
- Our targeted circuit foundation model should:
  - Capture intrinsic circuit properties unique to hardware:
    - Cross-stage views: RTL (functional) and netlist (physical)
    - Equivalent transformations: same semantics, different structure
  - Support various types of tasks:
    - Functionality: reasoning, verification, etc.
    - Design quality: performance, power, area, etc.

Key Idea: First RTL-Netlist Cross-Stage Alignment
- A general circuit foundation model solution:
  - Self-supervised pre-training: contrastive learning over circuit graph functionality
  - Cross-stage alignment between RTL (functional) and netlist (physical) views
  - Support for various design tasks via lightweight downstream task models: PPA and functionality
Comparison with Existing Solutions
- Circuit representation learning
  - Goal: learn a general circuit embedding that serves various tasks
- Explorations:
  - Supervised: HOGA, Gamora, etc.
  - Pre-trained: DeepGate family, FGNN, SNS v2, etc.

Comparison with Existing Solutions
- Limitation: existing methods still do not provide a fully general circuit representation
  - They mainly support one type of task (physical PPA or functionality)
  - They only target a single stage (RTL or netlist)

Circuit Design Stages: RTL & Netlist
- RTL
  - Earlier design stage, higher abstraction level, more semantic content
  - Task: predicting the PPA of the later netlist
- Netlist
  - Later design stage, lower abstraction level, more implementation details
  - Tasks: reasoning about the earlier RTL function; predicting the PPA of the later layout

Preprocessing: Circuit Data Alignment
- Circuit-to-graph transformation
- RTL-netlist data alignment by backtracing each register cone (see the sketch below)
- Advantages:
  - RTL and netlist cones are strictly aligned and functionally equivalent
  - Each cone captures the entire state transition of its register
  - Intermediate granularity gives better scalability
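Since the slides do not spell out the cone-extraction procedure, the following is a minimal sketch of the backtracing idea. It assumes the circuit (RTL or netlist) has already been converted to a networkx DiGraph with driver-to-sink edges and an `is_register` node attribute; all identifiers here are illustrative, not from the paper.

```python
# Minimal sketch of register-cone extraction (illustrative names only).
import networkx as nx

def backtrace_register_cone(g: nx.DiGraph, reg: str) -> nx.DiGraph:
    """Collect the fan-in cone of one register: walk backward from the
    register until hitting primary inputs or other registers, so the cone
    captures the register's entire state-transition logic."""
    cone = {reg}
    frontier = [reg]
    while frontier:
        node = frontier.pop()
        for pred in g.predecessors(node):
            if pred in cone:
                continue
            cone.add(pred)
            # Other registers terminate the cone: this keeps the granularity
            # intermediate (between node and full design) and thus scalable.
            if not g.nodes[pred].get("is_register", False):
                frontier.append(pred)
    return g.subgraph(cone).copy()

# Demo: a toggle circuit where the register feeds back through an XOR.
g = nx.DiGraph()
g.add_node("reg", is_register=True)
g.add_edges_from([("in", "xor"), ("reg", "xor"), ("xor", "reg")])
cone = backtrace_register_cone(g, "reg")  # nodes: {"reg", "xor", "in"}
```

Because registers generally persist through logic synthesis, extracting the cone of the same register in the RTL graph and in the netlist graph yields the strictly aligned, functionally equivalent cone pair described above.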
Encoding: Graph Learning Model for Circuits
- RTL graph: graph transformer with global positional encoding
- Netlist graph: graph neural network with neighbor aggregation
- Node-level embeddings are pooled into cone (graph)-level embeddings, as sketched below
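As a rough illustration of this encode-then-pool pattern (the setup slide names Graphormer for RTL and GraphSAGE for netlists; the stand-in layer and dense adjacency below are simplifications, not the paper's models):

```python
# Plain-PyTorch sketch: neighbor aggregation produces node-level embeddings,
# which are mean-pooled into a single cone (graph)-level embedding.
import torch
import torch.nn as nn

class MeanSageLayer(nn.Module):
    """GraphSAGE-style layer: concat(self, mean of neighbors) -> linear."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)   # avoid divide-by-0
        neigh = (adj @ x) / deg                           # mean of neighbors
        return torch.relu(self.proj(torch.cat([x, neigh], dim=-1)))

def cone_embedding(x, adj, layers):
    for layer in layers:          # message passing -> node-level embeddings
        x = layer(x, adj)
    return x.mean(dim=0)          # pool nodes -> one cone-level vector

layers = nn.ModuleList([MeanSageLayer(16) for _ in range(2)])
x, adj = torch.randn(5, 16), (torch.rand(5, 5) > 0.5).float()
emb = cone_embedding(x, adj, layers)   # shape: [16]
```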
CircuitEncoder Phase 1: Pre-Training
- Self-supervised pre-training captures intrinsic circuit properties
- Intra-stage contrastive learning (sketched below):
  - Minimize embedding distance between positive pairs (functionally equivalent transformations)
  - Maximize embedding distance among negative pairs (functionally different circuits)
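This objective is a standard contrastive setup; a hedged InfoNCE-style sketch (the paper's exact loss, temperature, and batching are not given on the slide) might look like:

```python
# InfoNCE-style intra-stage contrastive loss: row i of z_a and row i of z_b
# are two functionally equivalent views of cone i (positive pair); all other
# rows in the batch serve as functionally different negatives.
import torch
import torch.nn.functional as F

def intra_stage_contrastive(z_a, z_b, tau: float = 0.1):
    z_a, z_b = F.normalize(z_a, dim=-1), F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / tau          # [B, B] cosine similarities
    targets = torch.arange(z_a.size(0))   # diagonal entries are positives
    return F.cross_entropy(logits, targets)

loss = intra_stage_contrastive(torch.randn(8, 16), torch.randn(8, 16))
```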
CircuitEncoder Phase 1: Pre-Training
- Self-supervised pre-training captures intrinsic circuit properties
- Inter-stage contrastive learning (sketched below):
  - Cross-stage alignment between RTL and netlist embeddings
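The inter-stage term can be sketched the same way: a symmetric, CLIP-style contrastive loss over the strictly aligned RTL/netlist cone pairs, so the two encoders map equivalent cones to nearby embeddings. The exact formulation below is an assumption, not the paper's.

```python
# Symmetric cross-stage alignment: pair (z_rtl[i], z_net[i]) comes from the
# same register cone at the RTL and netlist stages and is the only positive.
import torch
import torch.nn.functional as F

def cross_stage_alignment(z_rtl, z_net, tau: float = 0.1):
    z_rtl, z_net = F.normalize(z_rtl, dim=-1), F.normalize(z_net, dim=-1)
    logits = z_rtl @ z_net.t() / tau
    targets = torch.arange(z_rtl.size(0))
    # Match each RTL cone to its netlist cone and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```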
CircuitEncoder Phase 2: Fine-Tuning for Tasks
- Supervised fine-tuning
- Lightweight task models: MLP, tree-based models, etc.

CircuitEncoder Phase 2: Fine-Tuning for Tasks
- Downstream tasks (a minimal fine-tuning sketch follows):
  - Register cone level:
    - Timing slack prediction at the RTL stage
    - Register function (control/data) identification at the netlist stage
  - Design level:
    - Overall PPA prediction at the RTL stage: WNS, TNS, area
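A minimal sketch of Phase 2, assuming frozen pre-trained cone embeddings and an MLP head (the deck only says "MLP, tree-based, etc."; sizes and names here are illustrative):

```python
# Lightweight task head on top of pre-trained cone embeddings.
import torch
import torch.nn as nn

def make_task_head(dim: int, out: int) -> nn.Module:
    # out=1 for regression heads (slack, WNS, TNS, area);
    # out=2 for classification heads (control vs. data register).
    return nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, out))

head = make_task_head(16, 1)
emb = torch.randn(32, 16)            # frozen pre-trained cone embeddings
slack_pred = head(emb).squeeze(-1)   # fine-tuned with supervised labels
```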
Experimental Results

Circuit Design Statistics
- 41 open-source designs
- 7,166 RTL and netlist cone pairs
- Data augmentation: 42,996 graphs in total

Experimental Setup
- Design flow:
  - RTL designs are synthesized with Design Compiler (DC) and the NanGate 45nm library
  - Design PPA metrics are obtained from PrimeTime (PT)
- Circuit augmentation: Yosys/ABC for functionally equivalent transformations
- Graph models: Graphormer for RTL, GraphSAGE for netlists

Experimental Setup
- Model training

Task Evaluation and Baseline Methods
- Design quality evaluation (regression metrics):
  - Register slack prediction at the cone level: RTL-Timer [DAC'24]
  - RTL-stage overall quality evaluation at the circuit level: MasterRTL [ICCAD'23], SNS v2 [MICRO'23]
- Functional reasoning (classification metrics):
  - Netlist-stage state register classification at the cone level: ReIGNN [ICCAD'21]

Results: Comparison with SOTA Solutions
- Outperforms each task-specific SOTA solution on cone-level tasks
- Few-shot learning: 50% of the training data for CircuitEncoder vs. 100% for the supervised baselines
Results: Comparison with SOTA Solutions
- Outperforms each task-specific SOTA solution on circuit-level tasks

Results: Comparison with SOTA Solutions
- Fine-tuning data size scaling: 12% → 25% → 50%
- The pre-trained CircuitEncoder remains stable as the fine-tuning data shrinks

Ablation Study
- Impact of cross-stage alignment
- Impact of the graph transformer

Conclusion & Future Work

Conclusion
- CircuitEncoder
  - Self-supervised and pre-trained: contrastive learning over circuit graph functionality
  - Cross-stage alignment: RTL function to netlist physics
  - Supports various tasks:
    - Design quality: slack, WNS, TNS, and area prediction
    - Functional reasoning: state register identification

Future Work
- Advancing the circuit foundation model:
  - Circuit-specific self-supervised learning
  - Multimodal circuit learning customized for each design stage:
    - RTL: AST/control-data flow graph, Verilog code, specification text
    - Netlist: connectivity graph, annotated node text
    - Layout: image, netlist graph
  - Existing encoders and decoders work separately (encoder: predictive tasks; decoder: generative tasks). A unified encoder-decoder?

Thank You! Questions?