On the Opacity of Deep Neural Networks

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

On the Opacity of Deep Neural Networks. / Søgaard, Anders.

In: Canadian Journal of Philosophy, 2024.

Harvard

Søgaard, A 2024, 'On the Opacity of Deep Neural Networks', Canadian Journal of Philosophy. https://doi.org/10.1017/can.2024.1

APA

Søgaard, A. (2024). On the Opacity of Deep Neural Networks. Canadian Journal of Philosophy. https://doi.org/10.1017/can.2024.1

Vancouver

Søgaard A. On the Opacity of Deep Neural Networks. Canadian Journal of Philosophy. 2024. https://doi.org/10.1017/can.2024.1

Author

Søgaard, Anders. / On the Opacity of Deep Neural Networks. In: Canadian Journal of Philosophy. 2024.

Bibtex

@article{e0c495a8177240f0b7dc4cd6c821758a,
title = "On the Opacity of Deep Neural Networks",
abstract = "Deep neural networks are said to be opaque, impeding the development of safe and trustworthy artificial intelligence, but where this opacity stems from is less clear. What are the sufficient properties for neural network opacity? Here, I discuss five common properties of deep neural networks and two different kinds of opacity. Which of these properties are sufficient for what type of opacity? I show how each kind of opacity stems from only one of these five properties, and then discuss to what extent the two kinds of opacity can be mitigated by explainability methods.",
keywords = "deep neural networks, explainability, mitigation, model size, opacity",
author = "Anders S{\o}gaard",
note = "Publisher Copyright: {\textcopyright} The Author(s), 2024. Published by Cambridge University Press on behalf of The Canadian Journal of Philosophy Inc.",
year = "2024",
doi = "10.1017/can.2024.1",
language = "English",
journal = "Canadian Journal of Philosophy",
issn = "0045-5091",
publisher = "Cambridge University Press",
}

RIS

TY - JOUR

T1 - On the Opacity of Deep Neural Networks

AU - Søgaard, Anders

N1 - Publisher Copyright: © The Author(s), 2024. Published by Cambridge University Press on behalf of The Canadian Journal of Philosophy Inc.

PY - 2024

Y1 - 2024

N2 - Deep neural networks are said to be opaque, impeding the development of safe and trustworthy artificial intelligence, but where this opacity stems from is less clear. What are the sufficient properties for neural network opacity? Here, I discuss five common properties of deep neural networks and two different kinds of opacity. Which of these properties are sufficient for what type of opacity? I show how each kind of opacity stems from only one of these five properties, and then discuss to what extent the two kinds of opacity can be mitigated by explainability methods.

AB - Deep neural networks are said to be opaque, impeding the development of safe and trustworthy artificial intelligence, but where this opacity stems from is less clear. What are the sufficient properties for neural network opacity? Here, I discuss five common properties of deep neural networks and two different kinds of opacity. Which of these properties are sufficient for what type of opacity? I show how each kind of opacity stems from only one of these five properties, and then discuss to what extent the two kinds of opacity can be mitigated by explainability methods.

KW - deep neural networks

KW - explainability

KW - mitigation

KW - model size

KW - opacity

U2 - 10.1017/can.2024.1

DO - 10.1017/can.2024.1

M3 - Journal article

AN - SCOPUS:85190158253

JO - Canadian Journal of Philosophy

JF - Canadian Journal of Philosophy

SN - 0045-5091

ER -

ID: 389904615