Genμ Challenge (U&ME Workshop @ ICCV'2025)

About the Challenge

Generative models are at the forefront of AI innovation, with applications in text generation, image synthesis, and more. While this progress offers exciting opportunities, it also introduces new challenges, as researchers bear the responsibility of understanding and mitigating the associated risks. Notably, a key ingredient of recent advances is the combination of massive datasets with ever-larger models. This has implications for privacy, as generative models tend to memorize details of their training set. Concretely, the field faces a significant challenge in meeting recent privacy regulations, such as the EU's General Data Protection Regulation (Mantelero, 2013) or Canada's Personal Information Protection and Electronic Documents Act, which stipulate that individuals have the "right to be forgotten".

The introduction of this legal notion has spurred the development of formal, mathematical notions of "deleting" or "obliterating" one's data, all studied under the auspices of "machine unlearning".

Informally, unlearning refers to removing the influence of a subset of the training set from the weights of a trained model. The development of novel formal models, their theoretical limitations, and efficient and scalable algorithms is a rich and growing subfield; see, for example, recent state-of-the-art works such as Concept Ablation (CA) by Kumari et al. (2023), ESD by Gandikota et al. (2023), and FADE by Thakral et al. (2025).
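The informal definition above is often made precise as follows. Writing \(\mathcal{A}\) for the training algorithm, \(D\) for the training set, and \(D_f \subseteq D\) for the data to be forgotten, one common formalization in the unlearning literature asks for an unlearning procedure \(\mathcal{U}\) whose output is (exactly or approximately) distributed like a model retrained from scratch without the forgotten data:

\[ \mathcal{U}\bigl(\mathcal{A}(D),\, D,\, D_f\bigr) \;\approx\; \mathcal{A}(D \setminus D_f) \quad \text{(in distribution).} \]

This is one standard reading of "removing the influence" of \(D_f\); it is not the challenge's official evaluation criterion.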

Task and Data

The Genμ Challenge addresses concept unlearning in text-to-image generative models. Starting from the public Stable Diffusion v1.4 checkpoint, participants must make the model "forget" specific visual concepts while preserving its overall abilities.

Pipeline overview

Evaluation axes

Dataset. A CSV file containing prompts and the concepts to be unlearned is provided. It forms the basis for every leaderboard metric and automatic evaluation.
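A minimal sketch of how such a file might be consumed, grouping evaluation prompts by target concept. The column names (`concept`, `prompt`) and the sample rows are illustrative assumptions, not the official schema:

```python
import csv
import io

# Hypothetical example of the challenge CSV; column names and
# concepts below are placeholders, not the official schema.
SAMPLE_CSV = """concept,prompt
parachute,a photo of a parachute in the sky
parachute,a skydiver descending with a parachute
church,a painting of an old church
"""

def load_prompts(csv_text):
    """Group evaluation prompts by the concept to be unlearned."""
    prompts_by_concept = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        prompts_by_concept.setdefault(row["concept"], []).append(row["prompt"])
    return prompts_by_concept

groups = load_prompts(SAMPLE_CSV)
print(sorted(groups))            # → ['church', 'parachute']
print(len(groups["parachute"]))  # → 2
```

Grouping by concept makes it easy to run a per-concept generation-and-scoring loop over the unlearned model.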

Participants are required to demonstrate their results using any one of the following three distinct unlearning strategies:

Evaluation Methodology:

  • Tie-breaker — Weight-Change Ratio
\[ \frac{1}{N_{c}}\, \sum_{i=1}^{N_{c}} \lVert\theta_{\text{orig}}^{\,i}-\theta_{\text{un}}^{\,i}\rVert \;\Big/\; \text{Total parameters}, \] where \(\theta_{\text{orig}}\) and \(\theta_{\text{un}}\) are the weights of the original and unlearned models.
  • Notation: \(N_{img}\) — images per concept  |   \(N_{c_{tar}}\) — target concepts  |   \(N_{c_{ret}}\) — retention-test concepts  |   \(N_{c_{adj}}\) — adjacent/correlated concepts.
  • Baselines: please refer to Baselines for reference.
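Read literally, the tie-breaker formula averages the L2 norm of the weight change over the \(N_c\) unlearned models and normalises by the total parameter count. A sketch of that reading (illustrative only, not the official evaluation script; the layer names and toy values are made up):

```python
import numpy as np

def weight_change_ratio(orig_weights, unlearned_models):
    """Tie-breaker sketch: mean L2 norm of the weight change across the
    N_c unlearned models, divided by the total parameter count.
    Each model is a dict mapping layer name -> ndarray of weights."""
    total_params = sum(w.size for w in orig_weights.values())
    norms = []
    for model in unlearned_models:
        # Squared L2 distance accumulated layer by layer.
        diff_sq = sum(
            np.sum((orig_weights[name] - model[name]) ** 2)
            for name in orig_weights
        )
        norms.append(np.sqrt(diff_sq))
    return float(np.mean(norms)) / total_params

# Toy example: two "unlearned models" of 3 parameters each.
orig = {"layer": np.zeros(3)}
un_a = {"layer": np.array([3.0, 4.0, 0.0])}  # ||diff|| = 5
un_b = {"layer": np.array([0.0, 0.0, 1.0])}  # ||diff|| = 1
print(weight_change_ratio(orig, [un_a, un_b]))  # → 1.0, i.e. ((5+1)/2)/3
```

Since smaller changes rank higher, a method that edits only a narrow slice of the weights is favoured in ties.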

Useful Links

Timeline

    • Registration (21 May – Still Open)
      Complete the official registration form.
    • Submission & Self-Reporting Evaluation (1 June – Still Open)
      • Upload the CSV produced by the evaluation script on Google Colab.
      • Entries are auto-graded on the Kaggle leaderboard, which updates in real time. (Please note: due to glitches on Kaggle, only the final submission of models sent to us via email or Google Form will be considered.)
      • Only the specified target concepts are unlearned and tested via direct, indirect, and adversarial prompts during this online stage.
      • Final model submission: Link
    • Final Model Upload (1 July – Still Open)
      Submit 20 single-concept unlearned models or one multi-concept / continual-unlearning model. Detailed upload instructions will be circulated to registered teams.
    • Independent Model Review (7 August – Still Open)
      The evaluation committee conducts a full offline assessment, including retention tests, adjacent-concept generalisation, and a tie-breaker based on the magnitude of weight changes required for unlearning (smaller changes rank higher).

Call for Paper Phase: The top-performing teams in the challenge will receive an exclusive invitation to co-author a research paper with the challenge and workshop organizing team, highlighting their innovative approaches and findings, provided that their methodology beats the state-of-the-art baselines (which will be shared with them soon).

Winners will be officially announced soon (date tentative).

Announcements

Announcements about the competition are below:

Challenge submission deadline: still accepting responses.
The registration link is now open [click here]
The challenge is now active and accepting responses. [click here]
Updated Evaluation tab with FADE baseline: Link to the Code

Acknowledgements

We gratefully acknowledge the collaboration and support of IndiaAI and the Srijan Centre for their invaluable contributions in working with the IITJ team to make this challenge possible.