Mathematical Sciences Colloquium Series, Thursday, November 14, 2024

November 06, 2024

We hope you will join us as Daniel Haider presents his research for the Department of Mathematical Sciences Colloquium Series.

"Injectivity and Stability of ReLU-layers: Perspectives from Frame Theory"

November 14, 1:00 PM, RB 452

Non-linearities frequently arise in applications, either due to technical constraints or as intentional elements of model design. A prominent instance of the latter is the use of compositions of affine linear mappings and non-linear activation functions as layers of artificial neural networks. Among the many layer designs, ReLU-layers, i.e., layers using ReLU(t) = max(0, t) as activation, are the most widely used due to their simplicity and effectiveness. By performing hard thresholding, the ReLU function naturally acts as a sparsifier, with a black-box mechanism determining which parts of the input information are suppressed and to what extent. Assessing whether the original input can be reconstructed from the output is therefore crucial for improving the interpretability and functionality of the associated models. This makes the injectivity analysis of ReLU-layers both a challenging and an intriguing problem that has not yet been fully solved.
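As a concrete illustration (not part of the talk itself), here is a minimal NumPy sketch of a ReLU-layer; the weight matrix W, bias b, and input x are hypothetical placeholders. The zero coordinates of the output mark exactly the information that the hard thresholding suppressed.

    import numpy as np

    def relu_layer(W, b, x):
        # Componentwise ReLU applied to the affine map x -> Wx + b
        return np.maximum(0.0, W @ x + b)

    # Hypothetical example: four weight vectors (rows of W) acting on R^2
    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 2))
    b = np.zeros(4)
    x = rng.standard_normal(2)
    y = relu_layer(W, b, x)
    print("output:", y)
    print("suppressed coordinates:", np.where(y == 0.0)[0])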

In this talk, we present a frame-theoretic perspective on the problem. The main objective is to develop the most general characterization of the injectivity of ReLU-layers in terms of all three components involved: (i) the frame that determines the linear mapping, (ii) the bias, which acts as an affine offset, and (iii) the input domain from which the data is drawn. From (ii), we can derive a practical, numerically implementable methodology for studying information loss in ReLU-layers. Finally, for injective ReLU-layers, we derive explicit reconstruction formulas based on the duality concept from frame theory and establish novel stability bounds for the recovery map.
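To make the duality idea tangible, here is a small sketch under stated assumptions: the rows of W play the role of the frame vectors, and the function name is ours, not a formula from the talk. Whenever an output coordinate is positive, the corresponding frame coefficient <w_i, x> + b_i is known exactly; if the active rows still span the input space, applying the pseudoinverse, i.e., the canonical dual frame of that sub-frame, recovers x.

    import numpy as np

    def reconstruct_from_relu(W, b, y, dim):
        # Rows where the ReLU did not threshold satisfy y_i = <w_i, x> + b_i
        active = y > 0
        W_a, c_a = W[active], y[active] - b[active]
        if np.linalg.matrix_rank(W_a) < dim:
            return None  # active rows do not form a frame; injectivity may fail
        # Synthesis with the canonical dual frame of the active sub-frame
        # coincides with the least-squares pseudoinverse
        return np.linalg.pinv(W_a) @ c_a

    # Hypothetical usage: six frame vectors in R^3
    rng = np.random.default_rng(1)
    W = rng.standard_normal((6, 3))
    b = -0.1 * np.ones(6)
    x = rng.standard_normal(3)
    y = np.maximum(0.0, W @ x + b)
    x_hat = reconstruct_from_relu(W, b, y, 3)
    if x_hat is not None:
        print("recovery error:", np.linalg.norm(x - x_hat))

Zero output coordinates only contribute inequalities <w_i, x> + b_i <= 0 and are ignored in this sketch; the talk's characterization of injectivity and its stability bounds address exactly when, and how stably, such recovery is possible.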

This is joint work with M. Ehler, D. Freeman, and P. Balazs.
