Sixth International Workshop on Symbolic-Neural Learning (SNL2022)
July 8-9, 2022
Venue: Toyota Technological Institute, Nagoya, Japan
Notice: The order of speakers on July 8th has been changed due to unavoidable circumstances.
SNL2022 will be held in person at the Toyota Technological Institute, Nagoya, Japan.
Symbolic-neural learning involves deep learning methods in combination with symbolic structures. A "deep learning method" is taken to be a learning process based on gradient descent on real-valued model parameters. A "symbolic structure" is a data structure involving symbols drawn from a large vocabulary; for example, sentences of natural language, parse trees over such sentences, databases (with entities viewed as symbols), and the symbolic expressions of mathematical logic or computer programs.
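As a purely illustrative aside (not drawn from any workshop contribution), the toy NumPy sketch below combines both ingredients: the input is a symbolic word sequence, while learning is gradient descent on real-valued parameters (an embedding table and a scoring vector). The vocabulary, objective, and all numbers are invented for this example.

```python
# Illustrative toy only: symbolic input (word indices), real-valued parameters
# (embedding table E and weight vector w) trained by plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "robot", "sees", "a", "cup"]      # toy symbolic vocabulary
V, d = len(vocab), 8

E = rng.normal(scale=0.1, size=(V, d))            # word embeddings (parameters)
w = rng.normal(scale=0.1, size=d)                 # scoring weights (parameters)

# One toy training example: a symbol sequence with a target score of 1.0.
sent = np.array([vocab.index(t) for t in ["the", "robot", "sees", "a", "cup"]])
target, lr = 1.0, 0.3

for step in range(500):
    h = E[sent].mean(axis=0)                      # continuous sentence representation
    err = h @ w - target                          # derivative factor of 0.5 * err**2
    grad_w = err * h                              # gradient w.r.t. scoring weights
    grad_E = np.zeros_like(E)
    grad_E[sent] = err * w / len(sent)            # gradient flows back to the embeddings
    w -= lr * grad_w
    E -= lr * grad_E

print(float(E[sent].mean(axis=0) @ w))            # close to the target of 1.0
```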
A distinctive feature of symbolic-neural learning is that it makes it possible to model interactions among different modalities: speech, vision, and language. Such multimodal information processing is crucial for bringing research outcomes into the real world.
Topics of interest include, but are not limited to, the following areas:
- Speech, vision, and natural language interactions in robotics
- Multimodal and grounded language processing
- Multimodal QA and translation
- Dialogue systems
- Language as a mechanism to structure and reason about visual perception
- Image caption generation and image generation from text
- General knowledge question answering
- Reading comprehension
- Textual entailment
Deep learning systems across these areas share various architectural ideas, including word and phrase embeddings, self-attention networks, recurrent neural networks (LSTMs and GRUs), and various memory mechanisms. Certain linguistic and semantic resources may also be relevant across these applications, for example dictionaries, thesauri, WordNet, FrameNet, FreeBase, DBPedia, parsers, named entity recognizers, coreference systems, knowledge graphs, and encyclopedias.
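As a small illustration of one of these shared ideas (again, not tied to any specific workshop system), the sketch below implements scaled dot-product self-attention in NumPy; the sequence length, dimensions, and random weights are arbitrary placeholders.

```python
# Minimal sketch of scaled dot-product self-attention over a short token
# sequence (illustrative only; all sizes and weights are arbitrary).
import numpy as np

rng = np.random.default_rng(0)
T, d = 4, 16                                   # sequence length, model dimension
X = rng.normal(size=(T, d))                    # embeddings of 4 input tokens

Wq, Wk, Wv = (rng.normal(scale=d ** -0.5, size=(d, d)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv               # queries, keys, values
scores = Q @ K.T / np.sqrt(d)                  # pairwise compatibility
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
output = weights @ V                           # each row: attention-weighted mix of values

print(output.shape)                            # (4, 16): one vector per input token
```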
The workshop consists of invited oral presentations and accepted posters.
Organizing Committee:
David McAllester | Toyota Technological Institute at Chicago, Chicago, USA
Tomoko Matsui | The Institute of Statistical Mathematics, Tokyo, Japan
Yutaka Sasaki (Chair) | Toyota Technological Institute, Nagoya, Japan
Koichi Shinoda | Tokyo Institute of Technology, Tokyo, Japan
Masashi Sugiyama | RIKEN Center for AIP and the University of Tokyo, Tokyo, Japan
Jun'ichi Tsujii | AIST AI Research Center, Tokyo, Japan and the University of Manchester, Manchester, UK
Yasushi Yagi | Osaka University, Osaka, Japan
Program Committee:
Norimichi Ukita (Chair) | Toyota Technological Institute, Nagoya, Japan
Yuki Arase | Osaka University, Osaka, Japan
Rei Kawakami | Tokyo Institute of Technology, Tokyo, Japan
Yuji Matsumoto | RIKEN Center for AIP and NAIST, Nara, Japan
Makoto Miwa | Toyota Technological Institute, Nagoya, Japan
Daichi Mochihashi | The Institute of Statistical Mathematics, Tokyo, Japan
Bradly Stadie | Toyota Technological Institute at Chicago, Chicago, USA
Hiroya Takamura | AIST AI Research Center, Tokyo, Japan and Tokyo Institute of Technology, Tokyo, Japan
Local Arrangements Committee:
Past Workshops: