LLMGuard: Guarding Against Unsafe LLM Behavior

Bibliographic Details
Title: LLMGuard: Guarding Against Unsafe LLM Behavior
Authors: Shubh Goyal, Medha Hira, Shubham Mishra, Sukriti Goyal, Arnav Goel, Niharika Dadu, Kirushikesh DB, Sameep Mehta, Nishtha Madaan
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language; Computer Science - Cryptography and Security; Computer Science - Machine Learning
Description: Although the rise of Large Language Models (LLMs) in enterprise settings brings new opportunities and capabilities, it also brings challenges, such as the risk of generating inappropriate, biased, or misleading content that violates regulations and can raise legal concerns. To mitigate this, we present "LLMGuard", a tool that monitors user interactions with an LLM application and flags content that matches specific undesirable behaviours or conversation topics. To do this robustly, LLMGuard employs an ensemble of detectors.
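The abstract describes a tool that runs an ensemble of detectors over user interactions and flags unsafe content. A minimal sketch of that general idea follows; all class names, the blocklist detector, and the flagging logic are hypothetical illustrations, not LLMGuard's actual implementation:

```python
class Detector:
    """Base class: a detector flags text matching an unsafe behaviour or topic."""
    name = "base"

    def flag(self, text: str) -> bool:
        raise NotImplementedError


class BlocklistDetector(Detector):
    """Toy detector: flags text containing any term from a blocklist."""
    name = "blocklist"

    def __init__(self, terms):
        self.terms = [t.lower() for t in terms]

    def flag(self, text: str) -> bool:
        lowered = text.lower()
        return any(term in lowered for term in self.terms)


class Guard:
    """Runs every detector on each message and collects the names that fire."""

    def __init__(self, detectors):
        self.detectors = detectors

    def check(self, text: str) -> dict:
        flags = [d.name for d in self.detectors if d.flag(text)]
        return {"flagged": bool(flags), "detectors": flags}


# Usage: wrap the ensemble around each user/LLM message before it is shown.
guard = Guard([BlocklistDetector(["password", "exploit"])])
print(guard.check("How do I exploit this bug?"))  # flagged by "blocklist"
print(guard.check("What is the weather today?"))  # not flagged
```

In the paper's setting, the toy blocklist would be replaced by the actual detectors (e.g. classifiers for specific behaviours or topics), with the ensemble structure letting new detectors be added independently.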
Comment: accepted in the demonstration track of AAAI-24
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2403.00826
Accession Number: edsarx.2403.00826
Database: arXiv