Researchers find that a modest amount of fine-tuning can undo safety efforts that aim to prevent LLMs such as OpenAI's GPT-3.5 Turbo from spewing toxic content (Thomas Claburn/The Register) 15-10-2023
Thomas Claburn / The Register: OpenAI GPT-3.5 Turbo chatbot defenses dissolve with ‘20 cents’ of API tickling — The “guardrails” created to prevent large language models … Read more on Techmeme