On June 22, 2021, Hongseok Yang (양홍석) from KAIST gave a talk at the Discrete Math Seminar, introducing DAG-symmetries and characterizing the linear layers of neural networks that preserve these symmetries. The title of his talk was “DAG-symmetries and Symmetry-Preserving Neural Networks”.
The preservation of symmetry is one of the key tools for designing data-efficient neural networks. A representative example is convolutional neural networks (CNNs): they preserve translation symmetries, and their success in real-world applications is often attributed to this symmetry preservation. In the machine-learning community, there is a growing body of work that explores new types of symmetries, both discrete and continuous, and studies neural networks that preserve those symmetries.
In this talk, I will explain what I call DAG-symmetries and our preliminary results on the shape of neural networks that preserve these symmetries. DAG-symmetries are finite variants of DAG-exchangeability, developed by Jung, Lee, Staton, and Yang (2020) in the context of probabilistic symmetries. Using these symmetries, we can express that when a neural network works on, for instance, sets of bipartite graphs whose edges are labelled with reals, the network depends on neither the order of elements in the set nor the identities of the vertices of the graphs. I will explain how a specific group of DAG-symmetries is constructed by applying a form of wreath product over a given finite DAG. Then, I will explain what linear layers of neural networks preserving these symmetries should look like.
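To give a concrete feel for "linear layers preserving a symmetry", here is a minimal sketch of the simplest case: equivariance under the full symmetric group acting on a set of scalar features, where every equivariant linear map has the form λI + γ·(1/n)·11ᵀ (as in the Deep Sets characterization). This is only an illustration of the base case, not the talk's DAG/wreath-product construction; the function name and parameters are my own.

```python
import numpy as np

def equivariant_linear(x, lam=2.0, gam=-0.5):
    """A linear map f(x) = lam * x + gam * mean(x) * 1 on a set of n scalars.

    Any linear layer commuting with all permutations of the n inputs has
    this two-parameter form; DAG-symmetries (per the talk) generalize the
    acting group from S_n to wreath products built over a finite DAG.
    """
    return lam * x + gam * x.mean() * np.ones_like(x)

rng = np.random.default_rng(0)
x = rng.normal(size=5)
perm = rng.permutation(5)

# Equivariance: permuting then applying the layer equals applying then permuting.
print(np.allclose(equivariant_linear(x[perm]), equivariant_linear(x)[perm]))  # → True
```

Note the contrast with a generic linear layer (an arbitrary n×n matrix), which has n² free parameters and fails this check; the symmetry constraint cuts the parameter count to two, which is the data-efficiency payoff mentioned above.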
This is joint work with Dongwoo Oh.