What Can We Do to Improve Peer Review in NLP?

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed


Peer review is our best tool for judging the quality of conference submissions, but it is becoming increasingly spurious. We argue that a part of the problem is that the reviewers and area chairs face a poorly defined task forcing apples-to-oranges comparisons. There are several potential ways forward, but the key difficulty is creating the incentives and mechanisms for their consistent implementation in the NLP community.
Original language: English
Title: Findings of the Association for Computational Linguistics: EMNLP 2020
Publisher: Association for Computational Linguistics
Publication date: 2020
Pages: 1256-1262
DOI
Status: Published - 2020
Event: The 2020 Conference on Empirical Methods in Natural Language Processing - online
Duration: 16 Nov 2020 - 20 Nov 2020
http://2020.emnlp.org

Conference

Conference: The 2020 Conference on Empirical Methods in Natural Language Processing
Location: online
Period: 16/11/2020 - 20/11/2020
Internet address: http://2020.emnlp.org




ID: 254996462