Circuit analysis and optimization techniques featuring neural-network models have recently been proposed, reducing computational time during optimization while retaining the accuracy of physics-based models. We present a novel approach for fast training of such neural-network models based on the sparse-matrix concept. The new training technique requires no change to the network structure; instead, it exploits the inherent property of neural networks that, for each training pattern, some neuron activations are close to zero and therefore have no effect on the network outputs or the weight updates. Much of the computational effort of standard training techniques is saved while the same accuracy is achieved. FET device and VLSI interconnect modeling examples verify the proposed technique.
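The sparsity idea in the abstract can be sketched as follows: during the backward pass for each pattern, hidden neurons whose activations fall below a threshold are masked out, so their gradient contributions are never computed. This is a minimal illustrative sketch, not the paper's implementation; the threshold `eps`, the network shapes, and the sigmoid activation are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-3  # activations below this magnitude are treated as exactly zero (assumed value)

# A tiny one-hidden-layer network with sigmoid activations.
W1 = rng.normal(size=(4, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_grads(x, y):
    """Backprop for one pattern, skipping near-zero hidden neurons."""
    h = sigmoid(x @ W1)              # hidden activations for this pattern
    active = np.abs(h) > eps         # mask: neurons that actually contribute
    y_hat = h[active] @ W2[active]   # forward pass over active neurons only
    err = y_hat - y
    # Gradients are computed only for active neurons; skipped rows stay zero,
    # which is where the training-time savings come from.
    gW2 = np.zeros_like(W2)
    gW2[active] = np.outer(h[active], err)
    gW1 = np.zeros_like(W1)
    d_h = (W2[active] @ err) * h[active] * (1.0 - h[active])
    gW1[:, active] = np.outer(x, d_h)
    return y_hat, gW1, gW2

x = rng.normal(size=4)
y_sparse, gW1, gW2 = sparse_grads(x, np.array([0.5]))
# Dense reference forward pass: the sparse output differs from it by at
# most eps times the skipped neurons' output weights.
y_dense = sigmoid(x @ W1) @ W2
```

The approximation error introduced by skipping a neuron is bounded by `eps` times its outgoing weight, which is why accuracy is preserved when the threshold is small.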

Additional Metadata
Keywords Modeling, Neural network, Optimization, Simulation
Persistent URL dx.doi.org/10.1109/22.641714
Journal IEEE Transactions on Microwave Theory and Techniques
Citation
Zaabab, A.H. (A. Hafid), Zhang, Q.J., & Nakhla, M.S. (1997). Device and circuit-level modeling using neural networks with faster training based on network sparsity. IEEE Transactions on Microwave Theory and Techniques, 45(10 PART 1), 1696–1704. doi:10.1109/22.641714