
Persuasion with large language models: a survey

Alexander Rogiers (UGent) , Sander Noels (UGent) , Maarten Buyl (UGent) and Tijl De Bie (UGent)
(2024) arXiv.
Abstract
The rapid rise of Large Language Models (LLMs) has created new disruptive possibilities for persuasive communication, by enabling fully-automated personalized and interactive content generation at an unprecedented scale. In this paper, we survey the research field of LLM-based persuasion that has emerged as a result. We begin by exploring the different modes in which LLM Systems are used to influence human attitudes and behaviors. In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM Systems have already achieved human-level or even super-human persuasiveness. We identify key factors influencing their effectiveness, such as the manner of personalization and whether the content is labelled as AI-generated. We also summarize the experimental designs that have been used to evaluate progress. Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks, including the spread of misinformation, the magnification of biases, and the invasion of privacy. These risks underscore the urgent need for ethical guidelines and updated regulatory frameworks to avoid the widespread deployment of irresponsible and harmful LLM Systems.

Downloads

  • survey paper main.pdf: full text (Author's original) | open access | PDF | 222.94 KB

Citation

Please use this URL to cite or link to this publication:

MLA
Rogiers, Alexander, et al. “Persuasion with Large Language Models: A Survey.” arXiv, 2024, doi:10.48550/arXiv.2411.06837.
APA
Rogiers, A., Noels, S., Buyl, M., & De Bie, T. (2024). Persuasion with large language models: a survey. https://doi.org/10.48550/arXiv.2411.06837
Chicago author-date
Rogiers, Alexander, Sander Noels, Maarten Buyl, and Tijl De Bie. 2024. “Persuasion with Large Language Models: A Survey.” arXiv. https://doi.org/10.48550/arXiv.2411.06837.
Chicago author-date (all authors)
Rogiers, Alexander, Sander Noels, Maarten Buyl, and Tijl De Bie. 2024. “Persuasion with Large Language Models: A Survey.” arXiv. doi:10.48550/arXiv.2411.06837.
Vancouver
1.
Rogiers A, Noels S, Buyl M, De Bie T. Persuasion with large language models: a survey. arXiv. 2024.
IEEE
[1]
A. Rogiers, S. Noels, M. Buyl, and T. De Bie, “Persuasion with large language models: a survey,” arXiv. 2024.
@misc{01JD4M72P2722T174X0YZHBB0B,
  abstract     = {{The rapid rise of Large Language Models (LLMs) has created new disruptive
possibilities for persuasive communication, by enabling fully-automated
personalized and interactive content generation at an unprecedented scale. In
this paper, we survey the research field of LLM-based persuasion that has
emerged as a result. We begin by exploring the different modes in which LLM
Systems are used to influence human attitudes and behaviors. In areas such as
politics, marketing, public health, e-commerce, and charitable giving, such LLM
Systems have already achieved human-level or even super-human persuasiveness.
We identify key factors influencing their effectiveness, such as the manner of
personalization and whether the content is labelled as AI-generated. We also
summarize the experimental designs that have been used to evaluate progress.
Our survey suggests that the current and future potential of LLM-based
persuasion poses profound ethical and societal risks, including the spread of
misinformation, the magnification of biases, and the invasion of privacy. These
risks underscore the urgent need for ethical guidelines and updated regulatory
frameworks to avoid the widespread deployment of irresponsible and harmful LLM
Systems.}},
  author       = {{Rogiers, Alexander and Noels, Sander and Buyl, Maarten and De Bie, Tijl}},
  language     = {{eng}},
  pages        = {{16}},
  series       = {{arXiv}},
  title        = {{Persuasion with large language models: a survey}},
  url          = {{https://doi.org/10.48550/arXiv.2411.06837}},
  year         = {{2024}},
}
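
For programmatic use, a BibTeX record equivalent to the export above can also be fetched directly from the DOI resolver, since arXiv DOIs are registered with DataCite, whose resolver supports content negotiation for the application/x-bibtex media type. A minimal sketch in Python, assuming the third-party requests package is installed:

import requests

# Sketch: fetch a BibTeX record for this paper via DOI content negotiation.
# The Accept header asks the DataCite resolver for BibTeX instead of
# redirecting to the arXiv landing page.
DOI_URL = "https://doi.org/10.48550/arXiv.2411.06837"

response = requests.get(
    DOI_URL,
    headers={"Accept": "application/x-bibtex"},
    timeout=10,
)
response.raise_for_status()
print(response.text)  # a BibTeX entry similar to the export above

Without the Accept header, the same DOI simply redirects to the paper's arXiv abstract page, which is the usual way to link to the publication.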
