Events

Rethinking Machine Unlearning For Large Language Models

LLM seminar event about the paper "Rethinking Machine Unlearning for Large Language Models".
Title: Rethinking Machine Unlearning for Large Language Models

Presenter: Tamas Grosz

Abstract: The authors explore machine unlearning (MU) in the domain of large language models (LLMs), referred to as LLM unlearning. This initiative aims to eliminate undesirable data influence (e.g., sensitive or illegal information) and the associated model capabilities, while maintaining the integrity of essential knowledge generation and not affecting causally unrelated information. They envision LLM unlearning becoming a pivotal element in the life-cycle management of LLMs, potentially standing as an essential foundation for developing generative AI that is not only safe, secure, and trustworthy, but also resource-efficient, without the need for full retraining. They navigate the unlearning landscape in LLMs across conceptual formulation, methodologies, metrics, and applications. In particular, they highlight the often-overlooked aspects of existing LLM unlearning research, e.g., unlearning scope, data-model interaction, and multifaceted efficacy assessment. They also draw connections between LLM unlearning and related areas such as model editing, influence functions, model explanation, adversarial training, and reinforcement learning. Furthermore, they outline an effective assessment framework for LLM unlearning and explore its applications in copyright and privacy safeguards and sociotechnical harm reduction.

Paper link:

Disclaimer: The presenter is not one of the authors.
