Peiyi Wang
Number of Papers:- 20
Number of Citations:- 0
First ACL Paper:- 2022
Latest ACL Paper:- 2024
Venues:-
ACL
EMNLP
NAACL
Findings-ACL
Findings-EMNLP
Findings-NAACL
Co-Authors:-
Baobao Chang
Benyou Wang
Binghuai Lin
Bofei Gao
Damai Dai
Papers:-
Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding
Findings-ACL
Heming Xia | Zhe Yang | Qingxiu Dong | Peiyi Wang | Yongqi Li | Tao Ge | Tianyu Liu | Wenjie Li | Zhifang Sui
Large Language Models are not Fair Evaluators
ACL
Peiyi Wang | Lei Li | Liang Chen | Zefan Cai | Dawei Zhu | Binghuai Lin | Yunbo Cao | Lingpeng Kong | Qi Liu | Tianyu Liu | Zhifang Sui
PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain
Findings-ACL
Liang Chen | Yichi Zhang | Shuhuai Ren | Haozhe Zhao | Zefan Cai | Yuchi Wang | Peiyi Wang | Xiangdi Meng | Tianyu Liu | Baobao Chang
Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations
ACL
Peiyi Wang | Lei Li | Zhihong Shao | Runxin Xu | Damai Dai | Yifei Li | Deli Chen | Yu Wu | Zhifang Sui
Multimodal ArXiv: A Dataset for Improving Scientific Comprehension of Large Vision-Language Models
ACL
Lei Li | Yuqi Wang | Runxin Xu | Peiyi Wang | Xiachong Feng | Lingpeng Kong | Qi Liu
VLFeedback: A Large-Scale AI Feedback Dataset for Large Vision-Language Models Alignment
EMNLP
Lei Li | Zhihui Xie | Mukai Li | Shunian Chen | Peiyi Wang | Liang Chen | Yazheng Yang | Benyou Wang | Lingpeng Kong | Qi Liu
Be a Multitude to Itself: A Prompt Evolution Framework for Red Teaming
Findings-EMNLP
Rui Li | Peiyi Wang | Jingyuan Ma | Di Zhang | Lei Sha | Zhifang Sui
Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation
Findings-EMNLP
Heming Xia | Tao Ge | Peiyi Wang | Si-Qing Chen | Furu Wei | Zhifang Sui
InfoCL: Alleviating Catastrophic Forgetting in Continual Text Classification from An Information Theoretic Perspective
Findings-EMNLP
Yifan Song | Peiyi Wang | Weimin Xiong | Dawei Zhu | Tianyu Liu | Zhifang Sui | Sujian Li
Rationale-Enhanced Language Models are Better Continual Relation Learners
EMNLP
Weimin Xiong | Yifan Song | Peiyi Wang | Sujian Li
Enhancing Continual Relation Extraction via Classifier Decomposition
Findings-ACL
Heming Xia | Peiyi Wang | Tianyu Liu | Binghuai Lin | Yunbo Cao | Zhifang Sui
Guiding AMR Parsing with Reverse Graph Linearization
Findings-EMNLP
Bofei Gao | Liang Chen | Peiyi Wang | Zhifang Sui | Baobao Chang
Not All Demonstration Examples are Equally Beneficial: Reweighting Demonstration Examples for In-Context Learning
Findings-EMNLP
Zhe Yang | Damai Dai | Peiyi Wang | Zhifang Sui
Incorporating Hierarchy into Text Encoder: a Contrastive Learning Approach for Hierarchical Text Classification
ACL
Zihan Wang | Peiyi Wang | Lianzhe Huang | Xin Sun | Houfeng Wang
Hierarchical Curriculum Learning for AMR Parsing
ACL
Peiyi Wang | Liang Chen | Tianyu Liu | Damai Dai | Yunbo Cao | Baobao Chang | Zhifang Sui
An Enhanced Span-based Decomposition Method for Few-Shot Sequence Labeling
NAACL
Peiyi Wang | Runxin Xu | Tianyu Liu | Qingyu Zhou | Yunbo Cao | Baobao Chang | Zhifang Sui
A Two-Stream AMR-enhanced Model for Document-level Event Argument Extraction
NAACL
Runxin Xu | Peiyi Wang | Tianyu Liu | Shuang Zeng | Baobao Chang | Zhifang Sui
ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs
Findings-NAACL
Liang Chen | Peiyi Wang | Runxin Xu | Tianyu Liu | Zhifang Sui | Baobao Chang
Learning Robust Representations for Continual Relation Extraction via Adversarial Class Augmentation
EMNLP
Peiyi Wang | Yifan Song | Tianyu Liu | Binghuai Lin | Yunbo Cao | Sujian Li | Zhifang Sui
HPT: Hierarchy-aware Prompt Tuning for Hierarchical Text Classification
EMNLP
Zihan Wang | Peiyi Wang | Tianyu Liu | Binghuai Lin | Yunbo Cao | Zhifang Sui | Houfeng Wang