July 9 (Saturday)
(*online) Poster presentations
(P01) Chihiro Nakatani, Hiroaki Kawashima, Norimichi Ukita, Configuration- and Action-aware Joint Attention Estimation
(P02) Takuma Yoneda, Ge Yang, Matthew R. Walter, Bradly Stadie, Invariance Through Latent Alignment
(P03) Yuki Kondo, Norimichi Ukita, Joint Learning of Blind Super-Resolution and Crack Segmentation for Degraded Images
(P04) Bradly C. Stadie, Lunjun Zhang, Ge Yang, World Model as a Graph: Learning Latent Landmarks for Planning
(P05) Shanshan Liu, Yuji Matsumoto, A Simple Method for End-to-End Relation Extraction
(P06) Takahiro Maeda, Norimichi Ukita, MotionAug: Augmentation with Physical Correction for Human Motion Prediction
(P07) Takeru Oba, Norimichi Ukita, Future-guided Imitation Learning for Improving Recurrent Training
(P08) Kohei Makino, Makoto Miwa, Yutaka Sasaki, A Sequential Edge Editor That Considers Relationships between Relations for Document-level Relation Extraction
(P09) Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo, Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers
(P10) Cristian Rodriguez-Opazo, Edison Marrese-Taylor, Basura Fernando, Hiroya Takamura, Qi Wu, Stochastic Bucket-wise Feature Sampling for Memory Efficient Moment Localization in Long Videos
(P11) Siti Oryza Khairunnisa, Zhousi Chen, Mamoru Komachi, A Study on Cross-Lingual Transfer for Named Entity Recognition in the Indonesian Language
(P12) Ryuki Ida, Makoto Miwa, Yutaka Sasaki, Text Classification Using a Document Graph with Nodes Initialized with Textual Information
(P13) Zhousi Chen, Mamoru Komachi, Discontinuous Constituency Parsing and Beyond
(P14) Nallappan Gunasekaran, Masaki Asada, Makoto Miwa, Heterogeneous Graph Representation Learning for Predicting Drug-Drug Interactions
(P15) Takashi Wada, Timothy Baldwin, Yuji Matsumoto, Jey Han Lau, Extracting Multi-Sense Word Embeddings from Pre-Trained Language Models for Unsupervised Lexical Substitution
(P16) Mohammad Golam Sohrab, Matiss Rikters, Makoto Miwa, Pre-trained Sequence-to-Sequence Models with BERT Non-Autoregressive Autoencoder
(P17) Masaki Asada, Makoto Miwa, Yutaka Sasaki, Recent Developments on Neural Drug-Drug Interaction Extraction from the Literature
(P18) Koji Watanabe, Katsumi Inoue, Learning State Transition Rules from Hidden Layers of Restricted Boltzmann Machines
(P19) Kazutoshi Akita, Norimichi Ukita, Context-aware Region-dependent Scale Proposals for Object Detection Using Super-Resolution
(P20) Tomoki Tsujimura, Makoto Miwa, Yutaka Sasaki, Concept-Level Relation Extraction over Linked Entities