Detecting Harmful Content on Online Platforms: What Platforms Need vs. Where Research Efforts Go

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

Detecting Harmful Content on Online Platforms: What Platforms Need vs. Where Research Efforts Go. / Arora, Arnav; Nakov, Preslav; Hardalov, Momchil; Sarwar, Sheikh Muhammad; Nayak, Vibha; Dinkov, Yoan; Zlatkova, Dimitrina; Dent, Kyle; Bhatawdekar, Ameya; Bouchard, Guillaume; Augenstein, Isabelle.

In: ACM Computing Surveys, Vol. 56, No. 3, 72, 2023, p. 1-17.

Harvard

Arora, A, Nakov, P, Hardalov, M, Sarwar, SM, Nayak, V, Dinkov, Y, Zlatkova, D, Dent, K, Bhatawdekar, A, Bouchard, G & Augenstein, I 2023, 'Detecting Harmful Content on Online Platforms: What Platforms Need vs. Where Research Efforts Go', ACM Computing Surveys, vol. 56, no. 3, 72, pp. 1-17. https://doi.org/10.1145/3603399

APA

Arora, A., Nakov, P., Hardalov, M., Sarwar, S. M., Nayak, V., Dinkov, Y., Zlatkova, D., Dent, K., Bhatawdekar, A., Bouchard, G., & Augenstein, I. (2023). Detecting Harmful Content on Online Platforms: What Platforms Need vs. Where Research Efforts Go. ACM Computing Surveys, 56(3), 1-17. [72]. https://doi.org/10.1145/3603399

Vancouver

Arora A, Nakov P, Hardalov M, Sarwar SM, Nayak V, Dinkov Y et al. Detecting Harmful Content on Online Platforms: What Platforms Need vs. Where Research Efforts Go. ACM Computing Surveys. 2023;56(3):1-17. 72. https://doi.org/10.1145/3603399

Author

Arora, Arnav ; Nakov, Preslav ; Hardalov, Momchil ; Sarwar, Sheikh Muhammad ; Nayak, Vibha ; Dinkov, Yoan ; Zlatkova, Dimitrina ; Dent, Kyle ; Bhatawdekar, Ameya ; Bouchard, Guillaume ; Augenstein, Isabelle. / Detecting Harmful Content on Online Platforms: What Platforms Need vs. Where Research Efforts Go. In: ACM Computing Surveys. 2023 ; Vol. 56, No. 3. pp. 1-17.

Bibtex

@article{2af80db510124808b22775134585c5dd,
title = "Detecting Harmful Content on Online Platforms: What Platforms Need vs. Where Research Efforts Go",
abstract = "The proliferation of harmful content on online platforms is a major societal problem, which comes in many different forms, including hate speech, offensive language, bullying and harassment, misinformation, spam, violence, graphic content, sexual abuse, self-harm, and many others. Online platforms seek to moderate such content to limit societal harm, to comply with legislation, and to create a more inclusive environment for their users. Researchers have developed different methods for automatically detecting harmful content, often focusing on specific sub-problems or on narrow communities, as what is considered harmful often depends on the platform and on the context. We argue that there is currently a dichotomy between what types of harmful content online platforms seek to curb, and what research efforts there are to automatically detect such content. We thus survey existing methods as well as content moderation policies by online platforms in this light and suggest directions for future work.",
keywords = "Online harms, bullying and harassment, content moderation, graphic content, hate speech, misinformation, offensive language, self-harm, sexual abuse, spam, violence",
author = "Arnav Arora and Preslav Nakov and Momchil Hardalov and Sarwar, {Sheikh Muhammad} and Vibha Nayak and Yoan Dinkov and Dimitrina Zlatkova and Kyle Dent and Ameya Bhatawdekar and Guillaume Bouchard and Isabelle Augenstein",
note = "Publisher Copyright: {\textcopyright} 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.",
year = "2023",
doi = "10.1145/3603399",
language = "English",
volume = "56",
pages = "1--17",
journal = "ACM Computing Surveys",
issn = "0360-0300",
publisher = "Association for Computing Machinery, Inc.",
number = "3",
}

RIS

TY - JOUR

T1 - Detecting Harmful Content on Online Platforms

T2 - What Platforms Need vs. Where Research Efforts Go

AU - Arora, Arnav

AU - Nakov, Preslav

AU - Hardalov, Momchil

AU - Sarwar, Sheikh Muhammad

AU - Nayak, Vibha

AU - Dinkov, Yoan

AU - Zlatkova, Dimitrina

AU - Dent, Kyle

AU - Bhatawdekar, Ameya

AU - Bouchard, Guillaume

AU - Augenstein, Isabelle

N1 - Publisher Copyright: © 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.

PY - 2023

Y1 - 2023

N2 - The proliferation of harmful content on online platforms is a major societal problem, which comes in many different forms, including hate speech, offensive language, bullying and harassment, misinformation, spam, violence, graphic content, sexual abuse, self-harm, and many others. Online platforms seek to moderate such content to limit societal harm, to comply with legislation, and to create a more inclusive environment for their users. Researchers have developed different methods for automatically detecting harmful content, often focusing on specific sub-problems or on narrow communities, as what is considered harmful often depends on the platform and on the context. We argue that there is currently a dichotomy between what types of harmful content online platforms seek to curb, and what research efforts there are to automatically detect such content. We thus survey existing methods as well as content moderation policies by online platforms in this light and suggest directions for future work.

AB - The proliferation of harmful content on online platforms is a major societal problem, which comes in many different forms, including hate speech, offensive language, bullying and harassment, misinformation, spam, violence, graphic content, sexual abuse, self-harm, and many others. Online platforms seek to moderate such content to limit societal harm, to comply with legislation, and to create a more inclusive environment for their users. Researchers have developed different methods for automatically detecting harmful content, often focusing on specific sub-problems or on narrow communities, as what is considered harmful often depends on the platform and on the context. We argue that there is currently a dichotomy between what types of harmful content online platforms seek to curb, and what research efforts there are to automatically detect such content. We thus survey existing methods as well as content moderation policies by online platforms in this light and suggest directions for future work.

KW - Online harms

KW - bullying and harassment

KW - content moderation

KW - graphic content

KW - hate speech

KW - misinformation

KW - offensive language

KW - self-harm

KW - sexual abuse

KW - spam

KW - violence

U2 - 10.1145/3603399

DO - 10.1145/3603399

M3 - Journal article

AN - SCOPUS:85176785424

VL - 56

SP - 1

EP - 17

JO - ACM Computing Surveys

JF - ACM Computing Surveys

SN - 0360-0300

IS - 3

M1 - 72

ER -
