Dataset distillation on MNIST and CIFAR10


Transcript of Dataset distillation on MNIST and CIFAR10


Dataset distillation compresses tens of thousands of training images into a handful of synthetic images:

MNIST: 60K images → distill → 10 images → train from a fixed init (13% accuracy before training) → 94% accuracy.

CIFAR10: 50K images → distill → 100 images → train from a fixed init (9% accuracy before training) → 54% accuracy.
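The figure reports only end results; as a rough illustration of how such distilled images can be learned, below is a minimal PyTorch sketch of the bilevel loop behind dataset distillation in the fixed-init MNIST setting: synthetic pixels and an inner step size are optimized so that one gradient step from the fixed initialization does well on real data. The tiny functional classifier, the random real_batch stand-in, and all hyperparameters are illustrative assumptions, not the exact setup behind the figure's numbers.

```python
import torch
import torch.nn.functional as F

def net(params, x):
    # Tiny two-layer classifier written functionally so we can
    # differentiate through its weights.
    h = torch.relu(x @ params[0] + params[1])
    return h @ params[2] + params[3]

torch.manual_seed(0)
# The fixed, known initialization ("fixed init" in the figure).
theta0 = [p.requires_grad_(True) for p in
          (torch.randn(784, 128) * 0.01, torch.zeros(128),
           torch.randn(128, 10) * 0.01, torch.zeros(10))]

x_syn = torch.randn(10, 784, requires_grad=True)   # 10 distilled images
y_syn = torch.arange(10)                           # one per class
lr_inner = torch.tensor(0.02, requires_grad=True)  # learned inner step size
outer_opt = torch.optim.Adam([x_syn, lr_inner], lr=0.01)

def real_batch():
    # Stand-in for a real MNIST minibatch; swap in a DataLoader in practice.
    return torch.randn(256, 784), torch.randint(0, 10, (256,))

for step in range(1000):
    # Inner step: one differentiable SGD update of the fixed init on the
    # synthetic data (create_graph=True keeps it in the autograd graph).
    inner_loss = F.cross_entropy(net(theta0, x_syn), y_syn)
    grads = torch.autograd.grad(inner_loss, theta0, create_graph=True)
    theta1 = [p - lr_inner * g for p, g in zip(theta0, grads)]

    # Outer step: the updated weights should classify *real* data well; the
    # gradient flows back through theta1 into the synthetic pixels.
    x_real, y_real = real_batch()
    outer_loss = F.cross_entropy(net(theta1, x_real), y_real)
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
```

The load-bearing choice is create_graph=True: it keeps the inner update differentiable, so the outer loss on real data can push gradients all the way back into the distilled pixels and the learned step size.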

Dataset distillation can quickly fine-tune pre-trained networks on new datasets:

SVHN: 73K images → train → model trained for SVHN (52% MNIST accuracy before fine-tuning). Using this pre-trained model as the fixed init, MNIST: 60K images → distill → 100 images encoding the domain difference → train → 85% MNIST accuracy.
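Given such a domain-difference set, the fine-tuning itself is only a few gradient steps. A minimal sketch, assuming a stand-in network and random tensors in place of the real SVHN-pretrained weights and the 100 distilled images (both of which would come from earlier stages):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-ins: a small classifier in the role of the SVHN-pretrained network,
# and random tensors in the role of the 100 distilled domain-difference
# images. In practice both are loaded from earlier stages.
pretrained = nn.Sequential(nn.Flatten(), nn.Linear(784, 128),
                           nn.ReLU(), nn.Linear(128, 10))
x_distilled = torch.randn(100, 1, 28, 28)   # 100 distilled images
y_distilled = torch.arange(100) % 10        # 10 per class

# "Quickly fine-tune": a handful of SGD steps on the distilled set alone.
opt = torch.optim.SGD(pretrained.parameters(), lr=0.02)
for _ in range(3):
    opt.zero_grad()
    loss = F.cross_entropy(pretrained(x_distilled), y_distilled)
    loss.backward()
    opt.step()

# Evaluating on real MNIST afterwards is what produces the 52% → 85% jump
# in the figure (with actual distilled images, not these stand-ins).
```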

Dataset distillation can maliciously attack classifier networks:

CIFAR10: 50K images → train → model trained for CIFAR10 (82% accuracy on class "plane") → distill → 300 attack images → train → attacked model (7% accuracy on class "plane").
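The attack reuses the same bilevel machinery, but with a malicious outer objective: the attack images are optimized so that a few ordinary training steps on them collapse a single class. A minimal sketch of the victim's side, assuming stand-in tensors for the trained CIFAR10 model and the 300 attack images; the poison label and class index are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

PLANE = 0  # CIFAR10 class index for "plane"

# Stand-ins for the victim CIFAR10 classifier and the 300 distilled attack
# images (real ones come from the bilevel loop above, with an outer loss
# that rewards misclassifying "plane" while sparing the other classes).
victim = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256),
                       nn.ReLU(), nn.Linear(256, 10))
x_attack = torch.randn(300, 3, 32, 32)
y_attack = torch.full((300,), 2)  # poisoned labels, e.g. "plane" -> "bird"

# The victim unknowingly takes a few training steps on the attack set...
opt = torch.optim.SGD(victim.parameters(), lr=0.01)
for _ in range(3):
    opt.zero_grad()
    F.cross_entropy(victim(x_attack), y_attack).backward()
    opt.step()

# ...and per-class evaluation on the real test set would then show the
# "plane" accuracy collapsing (82% -> 7% in the figure).
@torch.no_grad()
def class_accuracy(model, x, y, cls=PLANE):
    keep = y == cls
    return (model(x[keep]).argmax(dim=1) == cls).float().mean().item()
```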
