- OPTIMAL ROBUST MEMORIZATION WITH RELU NEURAL NETWORKS
Memorization with neural networks studies the expressive power of neural networks to interpolate a finite classification dataset, which is closely related to the generalizability of deep learning. However, the important problem of robust memorization has not been thoroughly studied.
- OPTIMAL ROBUST MEMORIZATION WITH RELU NEURAL NETWORKS - ICLR
Second, we explicitly construct neural networks with O(Nn) parameters for optimal robust memorization of any dataset with dimension n and size N. We also give a lower bound on the width of networks that achieve optimal robust memorization.
- On the Optimal Memorization Power of ReLU Neural Networks
We study the memorization power of feedforward ReLU neural networks. We show that such networks can memorize any N points that satisfy a mild separability assumption using Õ(√N) parameters. Known VC-dimension upper bounds imply that memorizing N samples requires Ω(√N) parameters, and hence our construction is optimal up to logarithmic factors.
- Optimal Robust Memorization with ReLU Neural Networks
Lijia Yu, Xiao-Shan Gao, Lijun Zhang: Optimal Robust Memorization with ReLU Neural Networks. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024.
- Small ReLU networks are powerful memorizers - NeurIPS
We study finite sample expressivity, i.e., the memorization power of ReLU networks. Recent results require N hidden nodes to memorize/interpolate arbitrary N data points. In contrast, by exploiting depth, we show that 3-layer ReLU networks with Ω(√N) hidden nodes can perfectly memorize most datasets with N points (the width-N baseline this improves on is sketched in the code after this list). We also prove that width Θ(√N) …
- arXiv:2110.03187v1 [cs.LG] 7 Oct 2021
… showed that neural networks with sigmoidal or ReLU activations can memorize N points in ℝ^d separated by a normalized distance of δ, using O(N^{2/3} + log(1/δ)) parameters (where the dimension d is constant). Thus, in this work we improve the dependence on N from N^{2/3} to √N (up to logarithmic factors), which is optimal.
- Optimal Robust Memorization with ReLU Neural Networks - dblp
Bibliographic details on Optimal Robust Memorization with ReLU Neural Networks.
- OPTIMAL ROBUST MEMORIZATION WITH RELU NEURAL NETWORKS - OpenReview
Memorization with neural networks studies the expressive power of neural networks to interpolate a finite classification dataset, which is closely related to the generalizability of deep learning.
- On the Optimal Memorization Power of ReLU Neural Networks
Optimal memorization capacity. Theorem: Let (x₁, y₁), …, (xₙ, yₙ) ∈ ℝ^d × {1, …, C}, where d is constant, ‖xᵢ‖ ≤ r for every i, and ‖xᵢ − xⱼ‖ ≥ δ for every i ≠ j. Then there exists a ReLU network F: ℝ^d → ℝ with Õ(√n) parameters such that F(xᵢ) = yᵢ for every i ∈ [n]. Here Õ(·) hides log factors in n, C, r, and 1/δ. This matches the Ω(√n) lower bound (up to log factors).
- Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity
ImageNet (N = 1M, 1k classes) can be memorized by 4-layer ReLU networks with hidden layer sizes 2k-2k-4k (see the quick √N arithmetic check after this list). Overparametrized NNs trained with SGD can memorize even random noise [Zhang et al., 2017]. To understand the memorization phenomenon, it is important to understand the expressive power of NNs.
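The contrast drawn in the Yun et al. item above (N hidden units for a naive construction versus √N when exploiting depth) is easiest to see from the baseline. Below is a minimal sketch in NumPy of the classical width-N, one-hidden-layer memorizer: project the data onto a random direction and fit a piecewise-linear ReLU interpolant through the projected points. This is only the baseline the papers above improve on, not their Õ(√N) or robust constructions; the dataset is a toy stand-in, and distinct projections are assumed (which holds with probability 1 for generic data). The final lines also illustrate, in the spirit of the Yu-Gao-Zhang paper, why plain memorization need not be robust.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in dataset: N points in R^d with class labels used as real targets.
N, d = 20, 5
X = rng.normal(size=(N, d))
y = rng.integers(0, 3, size=N).astype(float)

# Project onto a random direction; for generic data the projections
# z_i = <a, x_i> are pairwise distinct with probability 1.
a = rng.normal(size=d)
z = X @ a
order = np.argsort(z)
z_s, y_s = z[order], y[order]

# One hidden layer of N-1 ReLU units realizes the piecewise-linear
# interpolant g with g(z_i) = y_i: unit j contributes
# deltas[j] * relu(z - z_s[j]), i.e. a slope change at breakpoint z_s[j].
slopes = np.diff(y_s) / np.diff(z_s)   # target slope on each interval
deltas = np.diff(slopes, prepend=0.0)  # slope change at each breakpoint

def f(x):
    zx = x @ a                                                 # first layer: projection
    hidden = np.maximum(zx[:, None] - z_s[:-1][None, :], 0.0)  # N-1 ReLU units
    return y_s[0] + hidden @ deltas                            # linear output layer

assert np.allclose(f(X), y)  # the network interpolates all N training points
print(f"memorized {N} points with {N - 1} hidden ReLU units")

# Plain memorization is not robust memorization: nudging a point along the
# projection direction generally changes the output, because g has a
# breakpoint at every training projection.
x_pert = X[:1] + 1e-2 * a / np.linalg.norm(a)
print(f(X[:1]), f(x_pert))  # the two outputs generally differ
```

Cutting the width from N to Õ(√N), as in the results above, requires deeper networks and more delicate label encodings, and the robust variant additionally requires the network to predict the same label on a whole ball around each xᵢ, which this interpolant clearly does not.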
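As a quick arithmetic check of the Θ(√N) width scaling against the ImageNet figure quoted above: with N = 10⁶ we have √N = 1000, so the reported 2k-2k-4k hidden widths are small constant multiples of √N. Reading "2k" as 2048 is an assumption; 2000 gives the same conclusion.

```python
import math

N = 1_000_000                # ImageNet training-set size quoted above
widths = [2048, 2048, 4096]  # "2k-2k-4k" hidden sizes; 2k read as 2048 (assumption)
root = math.isqrt(N)         # sqrt(N) = 1000
print([w / root for w in widths])  # -> [2.048, 2.048, 4.096]: O(sqrt(N)) widths
```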