
Beyond transparency and explainability: on the need for adequate and contextualized user guidelines for LLM use

Kristian Gonzalez Barman (UGent) , Nathan Wood (UGent) and Pawel Pawlowski (UGent)
Abstract
Large language models (LLMs) such as ChatGPT present immense opportunities, but without proper training for users (and potentially oversight), they carry risks of misuse as well. We argue that current approaches focusing predominantly on transparency and explainability fall short in addressing the diverse needs and concerns of various user groups. We highlight the limitations of existing methodologies and propose a framework anchored on user-centric guidelines. In particular, we argue that LLM users should be given guidelines on what tasks LLMs can do well and which they cannot, which tasks require further guidance or refinement by the user, and context-specific heuristics. We further argue that (some) users should be taught to refine and elaborate adequate prompts, be provided with good procedures for prompt iteration, and be taught efficient ways to verify outputs. We suggest that for users, shifting away from looking at the technology itself, but rather looking at the usage of it within contextualized sociotechnical systems, can help solve many issues related to LLMs. We further emphasize the role of real-world case studies in shaping these guidelines, ensuring they are grounded in practical, applicable strategies. Like any technology, risks of misuse can be managed through education, regulation, and responsible development.
Keywords
Large language models (LLMs), User guidelines, Explainable artificial intelligence (XAI), AI ethics, CHATGPT, MODELS

Downloads

  • (...).pdf — full text (Published version) | UGent only | PDF | 697.65 KB

Citation

MLA
Gonzalez Barman, Kristian, et al. “Beyond Transparency and Explainability: On the Need for Adequate and Contextualized User Guidelines for LLM Use.” ETHICS AND INFORMATION TECHNOLOGY, vol. 26, no. 3, 2024, doi:10.1007/s10676-024-09778-2.
APA
Gonzalez Barman, K., Wood, N., & Pawlowski, P. (2024). Beyond transparency and explainability: on the need for adequate and contextualized user guidelines for LLM use. ETHICS AND INFORMATION TECHNOLOGY, 26(3). https://doi.org/10.1007/s10676-024-09778-2
Chicago author-date
Gonzalez Barman, Kristian, Nathan Wood, and Pawel Pawlowski. 2024. “Beyond Transparency and Explainability: On the Need for Adequate and Contextualized User Guidelines for LLM Use.” ETHICS AND INFORMATION TECHNOLOGY 26 (3). https://doi.org/10.1007/s10676-024-09778-2.
Chicago author-date (all authors)
Gonzalez Barman, Kristian, Nathan Wood, and Pawel Pawlowski. 2024. “Beyond Transparency and Explainability: On the Need for Adequate and Contextualized User Guidelines for LLM Use.” ETHICS AND INFORMATION TECHNOLOGY 26 (3). doi:10.1007/s10676-024-09778-2.
Vancouver
1. Gonzalez Barman K, Wood N, Pawlowski P. Beyond transparency and explainability: on the need for adequate and contextualized user guidelines for LLM use. ETHICS AND INFORMATION TECHNOLOGY. 2024;26(3).
IEEE
[1] K. Gonzalez Barman, N. Wood, and P. Pawlowski, “Beyond transparency and explainability: on the need for adequate and contextualized user guidelines for LLM use,” ETHICS AND INFORMATION TECHNOLOGY, vol. 26, no. 3, 2024.
@article{01J6S0303F45DK61J0EKXCXHDM,
  abstract     = {{Large language models (LLMs) such as ChatGPT present immense opportunities, but without proper training for users (and potentially oversight), they carry risks of misuse as well. We argue that current approaches focusing predominantly on transparency and explainability fall short in addressing the diverse needs and concerns of various user groups. We highlight the limitations of existing methodologies and propose a framework anchored on user-centric guidelines. In particular, we argue that LLM users should be given guidelines on what tasks LLMs can do well and which they cannot, which tasks require further guidance or refinement by the user, and context-specific heuristics. We further argue that (some) users should be taught to refine and elaborate adequate prompts, be provided with good procedures for prompt iteration, and be taught efficient ways to verify outputs. We suggest that for users, shifting away from looking at the technology itself, but rather looking at the usage of it within contextualized sociotechnical systems, can help solve many issues related to LLMs. We further emphasize the role of real-world case studies in shaping these guidelines, ensuring they are grounded in practical, applicable strategies. Like any technology, risks of misuse can be managed through education, regulation, and responsible development.}},
  articleno    = {{47}},
  author       = {{Gonzalez Barman, Kristian and Wood, Nathan and Pawlowski, Pawel}},
  issn         = {{1388-1957}},
  journal      = {{ETHICS AND INFORMATION TECHNOLOGY}},
  keywords     = {{Large language models (LLMs),User guidelines,Explainable artificial intelligence (XAI),AI ethics,CHATGPT,MODELS}},
  language     = {{eng}},
  number       = {{3}},
  pages        = {{12}},
  title        = {{Beyond transparency and explainability: on the need for adequate and contextualized user guidelines for LLM use}},
  url          = {{http://doi.org/10.1007/s10676-024-09778-2}},
  volume       = {{26}},
  year         = {{2024}},
}
