Hongseok Yang (양홍석), Learning Symmetric Rules with SATNet

SATNet is a differentiable constraint solver with a custom backpropagation algorithm, which can be used as a layer in a deep-learning system. It is a promising proposal for bridging deep learning and logical reasoning. In fact, SATNet has been successfully applied to learn, among other things, the rules of complex logical puzzles such as Sudoku just from input-output pairs, where the inputs are given as images. In this paper, we show how to improve the learning of SATNet by exploiting symmetries in the target rules of a given but unknown logical puzzle or, more generally, a logical formula. We present SymSATNet, a variant of SATNet that translates the given symmetries of the target rules into a condition on the parameters of SATNet, and requires the parameters to have a particular parametric form that guarantees this condition. The requirement dramatically reduces the number of parameters to learn for rules with enough symmetries, and makes the parameter learning of SymSATNet much easier than that of SATNet. We also describe a technique for automatically discovering symmetries of the target rules from examples. Our experiments with Sudoku and Rubik's cube show a substantial improvement of SymSATNet over the baseline SATNet.
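As a toy illustration (my sketch, not the authors' implementation), the idea of turning a symmetry into a condition on parameters can be seen on a single matrix of parameters: requiring a parameter matrix C to be invariant under a permutation group G means C = P_g C P_g^T for every g in G, and projecting C onto the invariant subspace by group averaging collapses many entries to shared values, shrinking the effective parameter count. Here the cyclic group on 4 variables is a hypothetical stand-in for the puzzle's symmetry group:

```python
import numpy as np

def perm_matrix(p):
    """Permutation matrix P with P[i, p[i]] = 1, so (P @ x)[i] = x[p[i]]."""
    n = len(p)
    P = np.zeros((n, n))
    P[np.arange(n), p] = 1.0
    return P

# Hypothetical toy symmetry: the cyclic group C4 acting on 4 variables by shifts.
n = 4
group = [perm_matrix(np.roll(np.arange(n), s)) for s in range(n)]

rng = np.random.default_rng(0)
C = rng.standard_normal((n, n))  # unconstrained parameters: n*n = 16 numbers

# Group averaging (Reynolds projection): the result satisfies P @ C_sym @ P.T == C_sym
# for every P in the group, i.e. it meets the symmetry condition exactly.
C_sym = sum(P @ C @ P.T for P in group) / len(group)

for P in group:
    assert np.allclose(P @ C_sym @ P.T, C_sym)

# Invariance under cyclic shifts makes C_sym circulant: its (i, j) entry depends
# only on (i - j) mod n, so only n = 4 distinct parameters remain out of 16.
print(len(np.unique(np.round(C_sym, 8))))
```

The same counting effect is what makes learning easier when the target rules have a large symmetry group: the invariant subspace, and hence the number of free parameters, is much smaller than the full parameter space.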

This is joint work with Sangho Lim and Eungyeol Oh.

Welcome Prof. Hongseok Yang (양홍석) from KAIST, a new Visiting Research Fellow in the IBS Discrete Mathematics Group

The IBS Discrete Mathematics Group welcomes Prof. Hongseok Yang (양홍석) from the School of Computing, KAIST, Daejeon, Korea. He will visit the IBS Discrete Mathematics Group for one year, from February 28, 2022 to February 24, 2023, during his sabbatical leave from KAIST. He received his Ph.D. from the University of Illinois at Urbana-Champaign in 2001 and, from May 2011, was a University Lecturer, an Assistant Professor, and a Full Professor in the Department of Computer Science at the University of Oxford. He moved to KAIST as a full professor in July 2017.

Hongseok Yang (양홍석), DAG-symmetries and Symmetry-Preserving Neural Networks

The preservation of symmetry is one of the key tools for designing data-efficient neural networks. A representative example is convolutional neural networks (CNNs): they preserve translation symmetries, and their success in real-world applications is often attributed to this symmetry preservation. In the machine-learning community, there is a growing body of work that explores new types of symmetries, both discrete and continuous, and studies neural networks that preserve those symmetries.

In this talk, I will explain what I call DAG-symmetries and our preliminary results on the shape of neural networks that preserve these symmetries. DAG-symmetries are finite variants of the DAG-exchangeability developed by Jung, Lee, Staton, and Yang (2020) in the context of probabilistic symmetries. Using these symmetries, we can express, for instance, that when a neural network works on sets of bipartite graphs whose edges are labelled with reals, the network depends neither on the order of elements in the set nor on the identities of the vertices of the graphs. I will explain how a specific group of DAG-symmetries is constructed by applying a form of wreath product over a given finite DAG. Then, I will explain what linear layers of neural networks preserving these symmetries should look like.
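The layers equivariant under full DAG-symmetries are beyond a short snippet, but the simplest instance, invariance under all permutations of a set, already shows the flavour of such characterisations: equivariance pins a linear layer on n scalar features down to a two-parameter family (the well-known Deep Sets form f(x) = λx + γ·sum(x)·1). A minimal NumPy sketch (my illustration for the symmetric-group case, not the wreath-product construction from the talk):

```python
import numpy as np

def equivariant_layer(x, lam, gam):
    """The general S_n-equivariant linear map R^n -> R^n: a scaled identity
    plus a scaled all-ones component. Only two free parameters, versus n*n
    for an unconstrained linear layer."""
    return lam * x + gam * x.sum() * np.ones_like(x)

rng = np.random.default_rng(1)
x = rng.standard_normal(5)
perm = rng.permutation(5)

y = equivariant_layer(x, lam=2.0, gam=-0.3)

# Equivariance check: permuting the input permutes the output the same way.
assert np.allclose(equivariant_layer(x[perm], 2.0, -0.3), y[perm])
```

Richer symmetry groups, such as the wreath-product groups built over a DAG, refine this picture: the equivariant linear layers decompose into more parameter blocks, one per orbit of the group's action on index pairs, but the counting principle is the same.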

This is joint work with Dongwoo Oh.

IBS Discrete Mathematics Group (DIMAG)
Institute for Basic Science (IBS)
55 Expo-ro Yuseong-gu Daejeon 34126 South Korea
E-mail: dimag@ibs.re.kr, Fax: +82-42-878-9209