BERT Loses Patience: Fast and Robust Inference with Early Exit
Wangchunshu Zhou*,
Canwen Xu*,
Tao Ge,
Julian McAuley,
Ke Xu,
and Furu Wei
NeurIPS 2020
Acceptance rate: 1900/9454=20.1%
[arXiv]
BERT-of-Theseus: Compressing BERT by Progressive Module Replacing
Canwen Xu*,
Wangchunshu Zhou*,
Tao Ge,
Furu Wei,
and Ming Zhou
EMNLP 2020
Acceptance rate: 754/3359=22.4%
[PDF]
[arXiv]
[URL]
[Code]
HuggingFace’s Transformers: State-of-the-art Natural Language Processing
The Hugging Face Team
EMNLP 2020 (Demo) [Best demo paper award]
[PDF]
[arXiv]
[URL]
[Code]
MATINF: A Jointly Labeled Large-Scale Dataset for Classification, Question Answering and Summarization
Canwen Xu*,
Jiaxin Pei*,
Hongtao Wu,
Yiyu Liu,
and Chenliang Li
ACL 2020
Acceptance rate: 779/3429=22.7%
[PDF]
[arXiv]
[URL]
[Video]
[Code]
Pre-train and Plug-in: Flexible Conditional Text Generation with Variational Auto-Encoders
Yu Duan*,
Canwen Xu*,
Jiaxin Pei*,
Jialong Han,
and Chenliang Li
ACL 2020
Acceptance rate: 779/3429=22.7%
[PDF]
[arXiv]
[URL]
[Video]
[Code]
UnihanLM: Coarse-to-Fine Chinese-Japanese Language Model Pretraining with the Unihan Database
Canwen Xu,
Tao Ge,
Chenliang Li,
and Furu Wei
AACL-IJCNLP 2020
Acceptance rate: 106/375=28.3%
[URL]
2019
DLocRL: A Deep Learning Pipeline for Fine-Grained Location Recognition and Linking in Tweets
Canwen Xu,
Jing Li,
Xiangyang Luo,
Jiaxin Pei,
Chenliang Li,
and Donghong Ji
WWW 2019
Acceptance rate: 72/361=19.9%
[arXiv]
[URL]