NLPExplorer
Ali Ghodsi
Number of Papers: 16
Number of Citations: 0
First ACL Paper: 2021
Latest ACL Paper: 2024
Venues:
NAACL
COLING
EMNLP
Findings
WNUT
ACL
EACL
Co-Authors:
Abbas Ghaddar
Ahmad Rashid
Alan Do Omri
Ali Saheb Pasand
Aref Jafari
Similar Authors:
2024
2023
2022
2021
Efficient Citer: Tuning Large Language Models for Enhanced Answer Quality and Verification
Findings-NAACL
Marzieh Tahaei | Aref Jafari | Ahmad Rashid | David Alfonso-Hermelo | Khalil Bibi | Yimeng Wu | Ali Ghodsi | Boxing Chen | Mehdi Rezagholizadeh
Sorted LLaMA: Unlocking the Potential of Intermediate Layers of Large Language Models for Dynamic Inference
Findings-EACL
Parsa Kavehzadeh | Mojtaba Valipour | Marzieh Tahaei | Ali Ghodsi | Boxing Chen | Mehdi Rezagholizadeh
QDyLoRA: Quantized Dynamic Low-Rank Adaptation for Efficient Large Language Model Tuning
EMNLP
Hossein Rajabzadeh | Mojtaba Valipour | Tianshu Zhu | Marzieh S. Tahaei | Hyock Ju Kwon | Ali Ghodsi | Boxing Chen | Mehdi Rezagholizadeh
DyLoRA: Parameter-Efficient Tuning of Pre-trained Models using Dynamic Search-Free Low-Rank Adaptation
EACL
Mojtaba Valipour | Mehdi Rezagholizadeh | Ivan Kobyzev | Ali Ghodsi
Do we need Label Regularization to Fine-tune Pre-trained Language Models?
EACL
Ivan Kobyzev | Aref Jafari | Mehdi Rezagholizadeh | Tianda Li | Alan Do-omri | Peng Lu | Pascal Poupart | Ali Ghodsi
When Chosen Wisely, More Data Is What You Need: A Universal Sample-Efficient Strategy For Data Augmentation
ACL
Findings
Ehsan Kamalloo | Mehdi Rezagholizadeh | Ali Ghodsi
KroneckerBERT: Significant Compression of Pre-trained Language Models Through Kronecker Decomposition and Knowledge Distillation
NAACL
Marzieh Tahaei | Ella Charlaix | Vahid Nia | Ali Ghodsi | Mehdi Rezagholizadeh
Pro-KD: Progressive Distillation by Following the Footsteps of the Teacher
COLING
Mehdi Rezagholizadeh | Aref Jafari | Puneeth S.M. Saladi | Pranav Sharma | Ali Saheb Pasand | Ali Ghodsi
Improving Generalization of Pre-trained Language Models via Stochastic Weight Averaging
Findings-EMNLP
Peng Lu | Ivan Kobyzev | Mehdi Rezagholizadeh | Ahmad Rashid | Ali Ghodsi | Phillippe Langlais
Continuation KD: Improved Knowledge Distillation through the Lens of Continuation Optimization
Findings-EMNLP
Aref Jafari | Ivan Kobyzev | Mehdi Rezagholizadeh | Pascal Poupart | Ali Ghodsi
Annealing Knowledge Distillation
EACL
Aref Jafari | Mehdi Rezagholizadeh | Pranav Sharma | Ali Ghodsi
RW-KD: Sample-wise Loss Terms Re-Weighting for Knowledge Distillation
EMNLP
Findings
Peng Lu | Abbas Ghaddar | Ahmad Rashid | Mehdi Rezagholizadeh | Ali Ghodsi | Philippe Langlais
Universal-KD: Attention-based Output-Grounded Intermediate Layer Knowledge Distillation
EMNLP
Yimeng Wu | Mehdi Rezagholizadeh | Abbas Ghaddar | Md Akmal Haidar | Ali Ghodsi
Not Far Away, Not So Close: Sample Efficient Nearest Neighbour Data Augmentation via MiniMax
Findings
Ehsan Kamalloo | Mehdi Rezagholizadeh | Peyman Passban | Ali Ghodsi
How to Select One Among All? An Empirical Study Towards the Robustness of Knowledge Distillation in Natural Language Understanding
EMNLP
Findings
Tianda Li | Ahmad Rashid | Aref Jafari | Pranav Sharma | Ali Ghodsi | Mehdi Rezagholizadeh
Knowledge Distillation with Noisy Labels for Natural Language Understanding
EMNLP
WNUT
Shivendra Bhardwaj | Abbas Ghaddar | Ahmad Rashid | Khalil Bibi | Chengyang Li | Ali Ghodsi | Phillippe Langlais | Mehdi Rezagholizadeh