Few-shot out-of-scope intent classification : analyzing the robustness of prompt-based learning
- Author
- Yiwei Jiang (UGent), Maarten De Raedt (UGent), Johannes Deleu (UGent), Thomas Demeester (UGent) and Chris Develder (UGent)
- Abstract
- Out-of-scope (OOS) intent classification is an emerging field in conversational AI research. The goal is to detect out-of-scope user intents that do not belong to a predefined intent ontology. However, establishing a reliable OOS detection system is challenging due to limited data availability. This situation necessitates solutions rooted in few-shot learning techniques. For such few-shot text classification tasks, prompt-based learning has been shown to be more effective than conventionally fine-tuned large language models with a classification layer on top. Thus, we advocate for exploring prompt-based approaches for OOS intent detection. Additionally, we propose a new evaluation metric, the Area Under the In-scope and Out-of-Scope Characteristic curve (AU-IOC). This metric addresses the shortcomings of current evaluation standards for OOS intent detection. AU-IOC provides a comprehensive assessment of a model's dual performance capacities: in-scope classification accuracy and OOS recall. Under this new evaluation method, we compare our prompt-based OOS detector against three strong baseline models by exploiting the metadata of intent annotations, i.e., intent descriptions. Our study found that our prompt-based model achieved the highest AU-IOC score across different data regimes. Further experiments showed that our detector is insensitive to a variety of intent descriptions. An intriguing finding is that for extremely low-data settings (1- or 5-shot), employing a naturally phrased prompt template boosts the detector's performance compared to more artificially structured template patterns.
- Keywords
- Domain detection, Few-shot learning, Prompt-based models, Outlier/novelty detection, Dialogue intent classification
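The AU-IOC metric named in the abstract is the area under a curve that trades in-scope classification accuracy against OOS recall. The record itself does not spell out how that curve is computed, so the sketch below is only one plausible reading, not the paper's definition: it sweeps a rejection threshold over the detector's maximum in-scope confidence, collects (OOS recall, in-scope accuracy) pairs, and integrates the curve numerically. The function name au_ioc, its parameters, and the threshold grid are illustrative assumptions.

```python
import numpy as np

def au_ioc(scores, preds, labels, oos_label="oos", n_thresholds=101):
    """Assumed AU-IOC-style computation (illustrative sketch, not the paper's
    exact definition): sweep a rejection threshold on the maximum in-scope
    confidence, trace (OOS recall, in-scope accuracy) pairs, and integrate
    the resulting curve with the trapezoidal rule."""
    scores = np.asarray(scores, dtype=float)   # max in-scope confidence per utterance
    preds = np.asarray(preds)                  # predicted in-scope intent per utterance
    labels = np.asarray(labels)                # gold labels; OOS utterances carry oos_label

    is_oos = labels == oos_label
    ins_correct = (~is_oos) & (preds == labels)

    oos_recalls, ins_accs = [], []
    for t in np.linspace(0.0, 1.0, n_thresholds):
        rejected = scores < t  # confidence below threshold -> predict OOS
        oos_recalls.append((rejected & is_oos).sum() / max(is_oos.sum(), 1))
        ins_accs.append(((~rejected) & ins_correct).sum() / max((~is_oos).sum(), 1))

    # Sort by OOS recall so the area is integrated over a monotone x-axis.
    order = np.argsort(oos_recalls)
    return float(np.trapz(np.asarray(ins_accs)[order], np.asarray(oos_recalls)[order]))
```

Under this reading, a higher score means the model keeps in-scope accuracy high while rejecting a larger share of OOS utterances across thresholds; for example, au_ioc([0.9, 0.2, 0.8], ["book_flight", "book_flight", "check_balance"], ["book_flight", "oos", "check_balance"]) evaluates a three-utterance toy set with one OOS query.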
Downloads
- 8463 acc.pdf | full text (Accepted manuscript) | open access | 1.93 MB
- (...).pdf | full text (Published version) | UGent only | 3.79 MB
Citation
Please use this url to cite or link to this publication: http://hdl.handle.net/1854/LU-01HN2WAV97S5VD80ZV6Z54RMKK
- MLA
- Jiang, Yiwei, et al. “Few-Shot Out-of-Scope Intent Classification : Analyzing the Robustness of Prompt-Based Learning.” APPLIED INTELLIGENCE, vol. 54, no. 2, 2024, pp. 1474–96, doi:10.1007/s10489-023-05215-x.
- APA
- Jiang, Y., De Raedt, M., Deleu, J., Demeester, T., & Develder, C. (2024). Few-shot out-of-scope intent classification : analyzing the robustness of prompt-based learning. APPLIED INTELLIGENCE, 54(2), 1474–1496. https://doi.org/10.1007/s10489-023-05215-x
- Chicago author-date
- Jiang, Yiwei, Maarten De Raedt, Johannes Deleu, Thomas Demeester, and Chris Develder. 2024. “Few-Shot Out-of-Scope Intent Classification : Analyzing the Robustness of Prompt-Based Learning.” APPLIED INTELLIGENCE 54 (2): 1474–96. https://doi.org/10.1007/s10489-023-05215-x.
- Chicago author-date (all authors)
- Jiang, Yiwei, Maarten De Raedt, Johannes Deleu, Thomas Demeester, and Chris Develder. 2024. “Few-Shot Out-of-Scope Intent Classification : Analyzing the Robustness of Prompt-Based Learning.” APPLIED INTELLIGENCE 54 (2): 1474–1496. doi:10.1007/s10489-023-05215-x.
- Vancouver
- 1. Jiang Y, De Raedt M, Deleu J, Demeester T, Develder C. Few-shot out-of-scope intent classification : analyzing the robustness of prompt-based learning. APPLIED INTELLIGENCE. 2024;54(2):1474–96.
- IEEE
- [1] Y. Jiang, M. De Raedt, J. Deleu, T. Demeester, and C. Develder, “Few-shot out-of-scope intent classification : analyzing the robustness of prompt-based learning,” APPLIED INTELLIGENCE, vol. 54, no. 2, pp. 1474–1496, 2024.
@article{01HN2WAV97S5VD80ZV6Z54RMKK,
  abstract  = {{Out-of-scope (OOS) intent classification is an emerging field in conversational AI research. The goal is to detect out-of-scope user intents that do not belong to a predefined intent ontology. However, establishing a reliable OOS detection system is challenging due to limited data availability. This situation necessitates solutions rooted in few-shot learning techniques. For such few-shot text classification tasks, prompt-based learning has been shown more effective than conventionally finetuned large language models with a classification layer on top. Thus, we advocate for exploring prompt-based approaches for OOS intent detection. Additionally, we propose a new evaluation metric, the Area Under the In-scope and Out-of-Scope Characteristic curve (AU-IOC). This metric addresses the shortcomings of current evaluation standards for OOS intent detection. AU-IOC provides a comprehensive assessment of a model's dual performance capacities: in-scope classification accuracy and OOS recall. Under this new evaluation method, we compare our prompt-based OOS detector against 3 strong baseline models by exploiting the metadata of intent annotations, i.e., intent description. Our study found that our prompt-based model achieved the highest AU-IOC score across different data regimes. Further experiments showed that our detector is insensitive to a variety of intent descriptions. An intriguing finding shows that for extremely low data settings (1- or 5-shot), employing a naturally phrased prompt template boosts the detector's performance compared to rather artificially structured template patterns.}},
  author    = {{Jiang, Yiwei and De Raedt, Maarten and Deleu, Johannes and Demeester, Thomas and Develder, Chris}},
  issn      = {{0924-669X}},
  journal   = {{APPLIED INTELLIGENCE}},
  keywords  = {{DOMAIN DETECTION,Few-shot learning,Prompt-based models,Outlier/novelty detection,Dialogue intent classification}},
  language  = {{eng}},
  number    = {{2}},
  pages     = {{1474--1496}},
  title     = {{Few-shot out-of-scope intent classification : analyzing the robustness of prompt-based learning}},
  url       = {{http://doi.org/10.1007/s10489-023-05215-x}},
  volume    = {{54}},
  year      = {{2024}},
}