About Me
I am a Ph.D. student at the Provable Responsible AI and Data Analytics (PRADA) Lab at the King Abdullah University of Science and Technology (KAUST), advised by Prof. Di Wang.
Before that, I was an algorithm engineer in the trustworthy AI research group at JD Explore Academy, JD.com, Inc. I received my MPhil in Computer Science from The University of Sydney, advised by Prof. Dacheng Tao, and my B.Sc. in Mathematics from the South China University of Technology, advised by Prof. Chuhua Xian.
Contact
shaopeng.fu@kaust.edu.sa
shaopengfu15@gmail.com
Pinned
PRADA Lab is looking for Postdocs/PhDs/Interns. Please check out this page.
Research Interests
My research lies in trustworthy AI. I am interested in using mathematical principles to identify and mitigate security and privacy risks in real-world machine learning systems. Currently, I am working on:
- Adversarial Robustness of Pre-trained Models
- Privacy-preserving Ability of Pre-trained Models
News
- 09/2024: I accepted the invitation to serve as a reviewer for AISTATS 2025.
- 08/2024: I accepted the invitation to serve as a reviewer for ICLR 2025.
- 08/2024: We released our new paper “Pre-trained Encoder Inference: Revealing Upstream Encoders In Downstream Machine Learning Services”.
- 07/2024: I accepted the invitation to serve as a PC member for AAAI 2025.
- 05/2024: I passed my Qualifying Exam!
- 05/2024: I accepted the invitation to serve as a reviewer for NeurIPS 2024.
- 04/2024: I will serve as an AEC member for CCS 2024.
- 01/2024: Our paper on robust overfitting and NTK was accepted to ICLR 2024!
- 12/2023: I accepted the invitation to serve as a reviewer for ICML 2024.
- 10/2023: We released our new paper “Theoretical Analysis of Robust Overfitting for Wide DNNs: An NTK Approach”.
- 09/2023: I accepted the invitation to serve as a reviewer for AISTATS 2024.
Selected Publications [Full List] [Google Scholar]
* indicates co-first authors.
Pre-trained Encoder Inference: Revealing Upstream Encoders In Downstream Machine Learning Services
[arXiv] [Code]
Shaopeng Fu, Xuexue Sun, Ke Qing, Tianhang Zheng, and Di Wang
arXiv preprint 2024
Theoretical Analysis of Robust Overfitting for Wide DNNs: An NTK Approach
[Link] [arXiv] [Video] [Code]
Shaopeng Fu and Di Wang
ICLR 2024
Robust Unlearnable Examples: Protecting Data Against Adversarial Learning
[Link] [arXiv] [Video] [Code]
Shaopeng Fu, Fengxiang He, Yang Liu, Li Shen, and Dacheng Tao
ICLR 2022
Knowledge Removal in Sampling-based Bayesian Inference
[Link] [arXiv] [Video] [Code]
Shaopeng Fu*, Fengxiang He*, and Dacheng Tao
ICLR 2022
Services
- Conference Reviewer: ICML (2022, 2023, 2024) / ICLR (2022, 2023, 2024, 2025) / NeurIPS (2021, 2022, 2023, 2024) / AISTATS (2021, 2024, 2025)
- Conference Committee: CCS 2024 (Artifact Evaluation) / AAAI 2025
- Journal Reviewer: IEEE TPAMI / IEEE TCYB / Springer NPL
Teaching
- Teaching Assistant for CS 229: Machine Learning, Spring 2024 @ KAUST
Selected Awards
- International Collegiate Programming Contest (ICPC)
  - The ICPC Asia-East Continent Final Xi’an Site, Silver Medal, 2018
  - The ICPC Asia Regional Contest Qingdao Site, Silver Medal, 2018
  - The ICPC Asia Regional Contest Shenyang Site, Gold Medal (Rank: 6/186), 2018
  - The ACM-ICPC Asia Regional Contest Xi’an Site, Silver Medal, 2017
- National Scholarship, Ministry of Education of P.R. China, 2017 & 2018