Experimenting with Zero-Knowledge Proofs of Training

Abstract

How can a model owner prove they trained their model according to the correct specification? More importantly, how can they do so while preserving the privacy of the underlying dataset and the final model? We study this problem and formulate the notion of zero-knowledge proof of training (zkPoT), which formalizes the rigorous security guarantees that a privacy-preserving proof of training should achieve. While it is theoretically possible to design a zkPoT for any model using generic zero-knowledge proof systems, this approach results in impractically long proof generation times. Towards designing a practical solution, we propose combining techniques from the MPC-in-the-head and zkSNARK literature to strike an appropriate trade-off between proof size and proof computation time. We instantiate this idea and propose a concretely efficient, novel zkPoT protocol for logistic regression.
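To make the statement being proved concrete, below is a minimal, illustrative Python sketch (not the paper's protocol) of the relation a zkPoT for logistic regression certifies: given public commitments to a dataset and a model, the prover knows openings such that the model is exactly the output of an agreed-upon training procedure on that dataset. The hash-based commitment, the full-batch gradient-descent specification, and all helper names here are illustrative assumptions; the actual protocol would prove such a relation in zero knowledge, without revealing the witness.

```python
import hashlib
import numpy as np

def commit(data: bytes, randomness: bytes) -> str:
    # Illustrative hash-based commitment; a real zkPoT would use a
    # proof-system-friendly commitment scheme instead.
    return hashlib.sha256(randomness + data).hexdigest()

def train_logreg(X, y, lr=0.1, epochs=100):
    # The public training specification: full-batch gradient descent on the
    # logistic loss. The zkPoT statement asserts that the committed model is
    # exactly the output of this procedure on the committed dataset.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (p - y) / len(y)      # gradient of the logistic loss
        w -= lr * grad
    return w

def relation_holds(com_D, com_w, X, y, w, r_D, r_w):
    # The relation R((com_D, com_w); (D, w, r_D, r_w)), checked "in the clear"
    # here. A zkPoT convinces a verifier this holds without revealing the
    # dataset, the model, or the commitment randomness.
    data_ok = commit(X.tobytes() + y.tobytes(), r_D) == com_D
    model_ok = commit(w.tobytes(), r_w) == com_w
    training_ok = np.allclose(train_logreg(X, y), w)
    return data_ok and model_ok and training_ok

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(32, 3))
    y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)
    w = train_logreg(X, y)
    r_D, r_w = b"rand-D", b"rand-w"
    com_D = commit(X.tobytes() + y.tobytes(), r_D)
    com_w = commit(w.tobytes(), r_w)
    print(relation_holds(com_D, com_w, X, y, w, r_D, r_w))  # True
```

Proving this relation naively with a generic proof system requires re-executing every training iteration inside the proof, which is the source of the impractical proof generation times mentioned above.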

Researchers

Sanjam Garg, UC Berkeley
Aarushi Goel, NTT Research
Somesh Jha, University of Wisconsin–Madison
Saeed Mahloujifar, Meta
Mohammad Mahmoody, University of Virginia
Guru-Vamsi Policharla, UC Berkeley
Mingyuan Wang, UC Berkeley
Paper accepted at CCS 2023: https://eprint.iacr.org/2023/1345