Chris (Yuhao) Liu

{yliu298 || chrisliu298} [at] ucsc [dot] edu

I am a first-year PhD student in Computer Science and Engineering at the University of California, Santa Cruz, advised by Yang Liu and Jeffrey Flanigan. I have had the privilege of being advised by Jeffrey Flanigan throughout my undergraduate and master's studies.

Currently, I am working on machine unlearning and alignment. Prior to this, I worked with Jeffrey Flanigan on fundamental problems in modern deep learning, including double descent, data scaling laws, and structural risk minimization.

Before my current studies, I earned both my master's and bachelor's degrees in Computer Science and Engineering at UC Santa Cruz.

CV  /  CV of Failure  /  Email  /  Google Scholar  /  Github  /  Blog  /  LinkedIn

News
  • [March 2024] I compiled a list of the latest papers on machine unlearning in large language models. The list is available here.
  • [December 2023] My paper on understanding the role of optimization in double descent has been accepted to the NeurIPS 2023 Optimization for Machine Learning Workshop. The paper is available here and the poster is available here.
  • [June 2023] I graduated from UC Santa Cruz with a Master's degree in Computer Science and Engineering. My Master's thesis is available here.
  • [March 2023] I am delighted to announce that I will be joining UC Santa Cruz as a PhD student in the fall of 2023. I have the privilege of being advised by Professor Yang Liu and Professor Jeffrey Flanigan.
Previous Events
  • [June 2021] I graduated from UC Santa Cruz with a Bachelor's degree in Computer Science and Engineering. I am excited to announce that I will be (re)joining Professor Jeffrey Flanigan's JLab as a Master's student in Fall 2021.
  • [June 2020] I joined Professor Jeffrey Flanigan's JLab and started to work on problems in neural scaling laws.
Research

These include publications and preprints.

Understanding the Role of Optimization in Double Descent
Chris Yuhao Liu, Jeffrey Flanigan
NeurIPS 2023 Optimization for Machine Learning Workshop
2023
[Paper] [Poster]

We demonstrate that the effects of a variety of disparate factors are unified into a single phenomenon from the viewpoint of optimization: double descent is observed if and only if the given optimizer can find a sufficiently low-loss minimum.

Understanding the Role of Optimization and Loss Function in Double Descent
Chris Yuhao Liu, Jeffrey Flanigan
2023
[Paper]

Overfitted models do not exhibit the double descent phenomenon due to 1) weak optimizers struggling to land at a low-loss local minimum and 2) the presence of an exponential tail in the shape of the loss function.

What Affects the Sample Complexity in Practice?
Chris Yuhao Liu, Jeffrey Flanigan
2022

We empirically estimate the power-law exponents of various model architectures and study how they are altered by a wide range of training conditions on classification tasks.

Faster Sample Complexity Rates With Ensemble Filtering
Chris Yuhao Liu, Jeffrey Flanigan
2021

We present a dataset filtering approach that uses sets of classifiers, similar to ensembling, to identify noisy (or non-realizable) examples and exclude them, so that a faster sample complexity rate is achievable in practice.

Other Projects

These include coursework and side projects.

Toward Disentangling Double Descent and Information Flow in Deep Neural Networks
Chris Yuhao Liu, Brendan King, Jing Gu
2022
[Paper] [Code] [Slides]

We study the relationship between the amount of mutual information compression and generalization in the context of the double descent phenomenon.

Learning to Extract Compact Vector Representations from Weight Matrices
Chris Yuhao Liu
2022
[Paper] [Code] [Slides]

We study the problem of learning to construct compact representations of neural network weight matrices by projecting them into a smaller space.

Understanding biased datasets and machines requires rethinking bias from scratch
Chris Yuhao Liu, Yuhang Gan, Zichao Li, Ruilin Zhou
2022

We surveyed recent works on dataset bias and machine learning bias.

Sample Complexity Scaling Laws For Adversarial Training
Chris Yuhao Liu
2020
[Paper] [Code]

We show that adversarial training (with the Fast Gradient Sign Method and Projected Gradient Descent) reduces the empirical sample complexity rate for MLPs and a variety of CNN architectures on MNIST and CIFAR-10.

TAPT: Text Augmentation Using Pre-Trained Transformers With Reinforcement Learning
Chris Yuhao Liu
2020
[Code]

Trained a distilled RoBERTa model as a text classifier and a GPT-2 model as a text generator, with the generator trained synchronously via proximal policy optimization, to produce augmented text for text classification tasks.

Conditional Generation of Research Paper Abstracts
Chris Yuhao Liu
2020
[Paper] [Code]

Fine-tuned a GPT-2 model using all research paper titles and abstracts under cs.AI, cs.LG, cs.CL, and cs.CV on arXiv. This project was the winner of the Generative Modeling Competition for the course CSE142 Machine Learning in Spring 2020.

Sentiment Analysis With Transformers
Chris Yuhao Liu
2020
[Code]

Fine-tuned a RoBERTa model on the IMDb dataset for sentiment analysis. This project was the winner of the Sentiment Analysis Competition for the course CSE142 Machine Learning in Spring 2020.

Service

Teaching
Research


This is a fork of Jon Barron's website.