CS329D: ML Under Distribution Shifts

A graduate course surveying topics in machine learning when the training and test data arise from different distributions.

Tatsunori Hashimoto

Instructor

Office Hours: Mon 3:00-3:30 (G354)

Description

The progress of machine learning systems has seemed remarkable and inexorable: a wide array of benchmark tasks, including image classification, speech recognition, and question answering, have seen consistent and substantial accuracy gains year over year. However, these same models consistently fail on atypical examples and on domains that are not represented in the training data. This course will cover methods for understanding and improving machine learning under distribution shift, where a model's training and test distributions are mismatched.

Course goals

The course aims to cover recent research on the following topics:

  • Definitions of various distribution shifts, framed in terms of distributional overlap or as the result of changes to the environment.
  • Real-world distribution shifts: domain adaptation in NLP and vision, as well as fairness in prediction tasks.
  • Methods for improving robustness: neural approaches, invariance constraints, and minimax losses.
  • Adversarial shifts: adversarial examples in image recognition, provable defenses, and data poisoning.

The goal of the course is to introduce the variety of areas in which distribution shifts are central and to equip students with the fundamentals needed to conduct research on more robust machine learning methods. To that end, the course will cover the classic papers and basic concepts of the area rather than spending the quarter on any single task or problem.

Course activities

The course will consist of three kinds of activities:

  • Lectures: Ten lectures covering domain adaptation theory and methods, representation-based approaches to robustness, minimax methods, adversarial examples, and data poisoning.
  • Paper discussions: Nine student-driven discussion and critique sessions in which we examine selected papers from each area.
  • Project: Each student will be responsible for implementing and testing one of the methods from the class on a distribution shift task of their choice.

The instructors will hold open office hours on Zoom. Please check Canvas for the Zoom link (keeping the link on Canvas restricts office hours to enrolled students).

For details on grading and accommodations, see the course policies.

Logistics

All lectures and discussions will be held in person. We will make our best effort to record lectures and discussions and post them on Canvas in a timely fashion. You will submit all assignments via Gradescope, to which you will be added automatically in the first week of instruction. Course announcements will be posted on Ed, which you can join using the access code shared on Canvas. To contact the course staff, please make an Ed post or email us.

Weekly Schedule

The week-to-week schedule and the papers covered are tentative and may change within the first week of the quarter.

Introduction and taxonomy of distribution shifts

Apr 3: Introduction (Lecture)
  1. Overview of the course
  2. Distribution shifts in the real world
  3. A taxonomy of distribution shifts and how they arise
Apr 5: Covariate and label shifts (Lecture + Discussion)
  1. What is a covariate shift?
  2. Handling covariate shift under distributional overlap.
  3. Shortcut Learning in Deep Neural Networks
Apr 10: Covariate and label shifts 2 (Discussion)
  1. Improving Predictive Inference Under Covariate Shift by Weighting the Log-Likelihood
  2. Adjusting the Outputs of a Classifier to New a Priori Probabilities: A Simple Procedure
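
Both discussion papers reduce to short corrections once the relevant quantities are known. Below is a minimal sketch in Python, assuming the density ratios and class priors are given rather than estimated (estimating them is the hard part, and is where the papers spend most of their effort):

    import numpy as np

    def importance_weighted_nll(log_probs, density_ratios):
        """Covariate shift correction in the spirit of Shimodaira:
        weight each training example's log-likelihood by the density
        ratio p_test(x) / p_train(x)."""
        return -np.mean(density_ratios * log_probs)

    def adjust_to_new_priors(posteriors, train_priors, new_priors):
        """Label shift correction in the spirit of Saerens et al.:
        rescale p(y | x) by the ratio of new to old class priors,
        then renormalize each row. posteriors has shape (n, C)."""
        adjusted = posteriors * (new_priors / train_priors)
        return adjusted / adjusted.sum(axis=1, keepdims=True)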

Domain adaptation theory

Apr 12: Domain adaptation (Lecture)
  1. When can we provably learn under distribution shift?
  2. Defining generalization bounds under distribution shift.
  3. Adversarial approaches to neural domain adaptation.
Apr 17: Domain adaptation 2 (Discussion)
  1. A Theory of Learning from Different Domains
  2. Domain-Adversarial Training of Neural Networks
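
As a one-line anchor for the discussion, the central bound from "A Theory of Learning from Different Domains" controls target error by source error, a divergence between the two domains, and the error of the best joint hypothesis (stated here in simplified form): for any h in the hypothesis class H,

    \epsilon_T(h) \le \epsilon_S(h) + \tfrac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) + \lambda,
    \qquad \lambda = \min_{h' \in \mathcal{H}} \left[ \epsilon_S(h') + \epsilon_T(h') \right]

Domain-adversarial training can be read as minimizing a trainable proxy for the divergence term using a domain discriminator.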

Neural and representation-based methods

Apr 19: Neural domain adaptation (Lecture)
  1. Provable guarantees from representational indistinguishability
  2. Self-training based domain adaptation
  3. Self-supervision based domain adaptation
Apr 24: Neural domain adaptation 2 (Discussion)
  1. Test-Time Training with Self-Supervision for Generalization under Distribution Shifts
  2. Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks
Apr 26: Empirical phenomena in robust machine learning (Lecture)
  1. How do different robustness interventions fare in practice?
  2. Can (data augmentation / unlabeled data / bigger models) help?
May 1: Empirical phenomena in robust machine learning 2 (Discussion; project progress report due)
  1. Is a Caption Worth a Thousand Images? A Controlled Study for Representation Learning
  2. Accuracy on the Line: On the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization

Robustness and domain generalization

May 3: Connections to causality (Lecture)
  1. Distribution shifts as arising from causal interventions.
  2. Existing connections between causality and robustness.
  3. Robustness and invariance as tools for causal inference.
May 8: Connections to causality 2 (Discussion)
  1. Conditional Variance Penalties and Domain Shift Robustness
  2. Invariant Risk Minimization
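
For reference during the discussion, the practical IRMv1 objective from the Invariant Risk Minimization paper penalizes, for each training environment e, the gradient of that environment's risk with respect to a fixed scalar classifier w = 1.0 placed on top of the shared representation Phi:

    \min_{\Phi} \sum_{e \in \mathcal{E}_{tr}} R^e(\Phi) + \lambda \left\lVert \nabla_{w \mid w = 1.0}\, R^e(w \cdot \Phi) \right\rVert^2
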
May 10: Minimax methods (Lecture)
  1. Robustness as a minimax game between nature and the model.
  2. Tractable families of worst-case distributions and duality.
  3. Pitfalls and pessimism from worst-case bounds.
May 15: Minimax methods 2 (Discussion)
  1. Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization
  2. Certifying Some Distributional Robustness with Principled Adversarial Training
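
Both papers instantiate the same minimax template, differing only in the family Q of plausible test distributions: the set of group-conditional distributions in the first, and a divergence ball around the training distribution in the second.

    \min_{\theta} \; \sup_{Q \in \mathcal{Q}} \; \mathbb{E}_{(x, y) \sim Q} \left[ \ell(\theta; x, y) \right]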

Adversarial robustness

May 17: Adversarial examples (Lecture)
  1. Defining and motivating adversarial examples.
  2. Heuristic defenses and their pitfalls.
  3. Provable defenses.
May 22: Adversarial examples 2 (Discussion)
  1. Unlabeled Data Improves Adversarial Robustness
  2. Certified Adversarial Robustness via Randomized Smoothing
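
The prediction rule in the randomized smoothing paper is short enough to sketch. Here is a minimal Monte Carlo version in Python, assuming base_classifier maps a batch of inputs to integer class labels; the paper additionally uses statistical tests on these vote counts to certify an l2 radius around x:

    import numpy as np

    def smoothed_predict(base_classifier, x, sigma, n_samples=1000):
        """Monte Carlo estimate of the smoothed classifier
        g(x) = argmax_c P[f(x + eps) = c], with eps ~ N(0, sigma^2 I).
        Larger sigma yields larger certified radii at the cost of
        base-classifier accuracy on the noisy inputs."""
        noise = np.random.normal(0.0, sigma, size=(n_samples,) + x.shape)
        votes = base_classifier(x[None] + noise)  # shape (n_samples,)
        return np.bincount(votes).argmax()
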
May 24: Data poisoning (Lecture)
  1. What is data poisoning?
  2. Robust statistics and high-dimensional mean estimation
  3. Convex optimization under data poisoning
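
As a warm-up for the robust statistics material, a coordinate-wise trimmed mean illustrates the basic idea of discarding suspected outliers before estimating. It is a toy baseline; the high-dimensional filtering methods covered in lecture are considerably stronger:

    import numpy as np

    def trimmed_mean(samples, trim_frac=0.1):
        """Drop the smallest and largest trim_frac fraction of values
        in each coordinate, then average the rest. samples has shape
        (n_points, n_dims)."""
        k = int(trim_frac * len(samples))
        s = np.sort(samples, axis=0)
        return s[k:len(samples) - k].mean(axis=0)
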
May 29: Memorial Day (Holiday)
May 31: Data poisoning 2 (Discussion)
  1. Poisoning Web-Scale Training Datasets is Practical
  2. SEVER: A Robust Meta-Algorithm for Stochastic Optimization
June 5: No class; final project report due (Project)
June 7: Short project presentations (Project)