Do privacy-preserving techniques improve resilience to adversarial examples?
- Privacy preservation and generalization are related; there are connections to stability and to poisoning attacks.
- Confidentiality of the model itself? One option is trusted hardware.
- Federated and private ML give no confidentiality guarantees if the updates are centralized (see the aggregation sketch below).
- Obfuscate, hide, or perhaps poison the information handed to the adversary when a device containing the ML model is captured.
- The model should behave as an oracle that cannot be examined to retrieve information about the training data: it must not be possible to inspect the model parameters and recover the training data, or, in the extreme, to replicate the whole training set.
- This can be difficult to guarantee because of model extraction attacks.
- Possible defense: require multiple agents to make the model useful, so that a single agent does not have enough information to reproduce it.
- Goals: protect the training data and prevent the extraction of a useful model; the basic threat is that an adversary can reconstruct the model from an (input, output) table (see the extraction sketch below).
- Require a key to activate the model's inference stage. This could be a "physical key", e.g. an ML model for a drone that only infers correctly on inputs produced by the drone's own hardware (see the keyed-inference sketch below).
- Influence of latent features on the model's predictions: these latent features could themselves be trade secrets.
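Aggregation sketch. A minimal FedAvg-style loop, assuming a toy linear model and illustrative names (`local_update`, `clients`), to make the concern concrete: every client's update reaches the central server in the clear before averaging, so centralizing the updates by itself provides no confidentiality (secure aggregation or differential privacy would have to be added on top).

```python
# Minimal federated-averaging sketch (illustrative, not a real FL framework):
# clients compute local updates on private data and send them to a server,
# which sees each individual update before averaging them.
import numpy as np

rng = np.random.default_rng(2)
dim = 5
global_weights = np.zeros(dim)

def local_update(weights, X, y, lr=0.1):
    """One least-squares gradient step on a client's private data."""
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Each client holds private data; only the weight deltas leave the device.
clients = [(rng.normal(size=(50, dim)), rng.normal(size=50)) for _ in range(3)]

for _ in range(10):
    deltas = []
    for X, y in clients:
        new_w = local_update(global_weights, X, y)
        deltas.append(new_w - global_weights)   # sent to the server in the clear
    # The server observes every individual delta here; without secure
    # aggregation it can analyze them (e.g. gradient inversion) per client.
    global_weights = global_weights + np.mean(deltas, axis=0)

print("global weights after 10 rounds:", np.round(global_weights, 3))
```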
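Extraction sketch. A hedged illustration of the "reconstruct the model from an (input, output) table" threat, using scikit-learn; the victim (`oracle`) and the attacker's surrogate are assumptions chosen for brevity, not a specific attack from the notes. The adversary never sees parameters or training data, only query answers, yet recovers a usefully similar model.

```python
# Minimal model-extraction sketch: query a deployed model as a black box,
# record the (input, output) table, and fit a surrogate that mimics it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Victim model, trained on private data the adversary never sees.
X_private = rng.normal(size=(500, 10))
y_private = (X_private[:, 0] + X_private[:, 1] > 0).astype(int)
oracle = LogisticRegression().fit(X_private, y_private)

# Adversary: query the oracle on self-chosen inputs and record the answers.
X_queries = rng.normal(size=(2000, 10))
y_answers = oracle.predict(X_queries)          # the (input, output) table

# Fit a surrogate on the query table; no access to parameters or training data.
surrogate = DecisionTreeClassifier(max_depth=5).fit(X_queries, y_answers)

# Agreement with the oracle on fresh inputs measures how useful the copy is.
X_test = rng.normal(size=(1000, 10))
agreement = (surrogate.predict(X_test) == oracle.predict(X_test)).mean()
print(f"surrogate/oracle agreement: {agreement:.2%}")
```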
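Keyed-inference sketch. One way to read the "physical key" idea: train the model on inputs transformed with a device-specific secret (here a feature permutation held by the hardware), so a captured copy predicts poorly on raw inputs. The `lock` helper, the permutation key, and the toy classifier are all illustrative assumptions, not the mechanism the notes commit to.

```python
# Sketch of key-gated inference: the model only works on inputs that have been
# "locked" with the device's secret permutation key.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def lock(X, key):
    """Apply the device's secret feature permutation (the 'physical key')."""
    return X[:, key]

n_features = 20
key = rng.permutation(n_features)              # held only by the drone's hardware

X = rng.normal(size=(2000, n_features))
y = (X[:, :2].sum(axis=1) > 0).astype(int)

# Train on locked inputs only.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(lock(X, key), y)

X_new = rng.normal(size=(500, n_features))
y_new = (X_new[:, :2].sum(axis=1) > 0).astype(int)

acc_with_key = model.score(lock(X_new, key), y_new)   # legitimate device
acc_without_key = model.score(X_new, y_new)           # captured model, no key
print(f"with key: {acc_with_key:.2f}, without key: {acc_without_key:.2f}")
```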