Abstract
We introduce a gain-function viewpoint of information leakage by proposing maximal g-leakage, a rich class of operationally meaningful leakage measures that subsumes the recently introduced maximal leakage and maximal α-leakage. In maximal g-leakage, the gain of an adversary in guessing an unknown random variable is measured by applying a gain function to the probability of a correct guess. In particular, maximal g-leakage captures the multiplicative increase, upon observing Y, in the expected gain of an adversary in guessing a randomized function of X, maximized over all such randomized functions. We also consider the scenario where an adversary can make multiple attempts to guess the randomized function of interest. We show that maximal leakage is an upper bound on maximal g-leakage under multiple guesses, for any non-negative gain function g. We obtain a closed-form expression for maximal g-leakage under multiple guesses for a class of concave gain functions. We also study the maximal g-leakage measure for a specific class of gain functions related to the α-loss, which interpolates between log-loss (α = 1) and (soft) 0-1 loss (α = ∞). In particular, we first completely characterize the minimal expected α-loss under multiple guesses and analyze how the corresponding leakage measure varies with the number of guesses. We show that a new measure of divergence belonging to the class of Bregman divergences captures the relative performance of an arbitrary adversarial strategy with respect to an optimal strategy in minimizing the expected α-loss. Finally, we study two variants of maximal g-leakage depending on the type of adversary and obtain closed-form expressions for them that do not depend on the particular gain function considered, as long as it satisfies some mild regularity conditions.
We do this by developing a variational characterization for the Rényi divergence of order infinity which naturally generalizes the definition of pointwise maximal leakage to incorporate arbitrary gain functions.
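As a concrete point of reference for the upper-bound result above, maximal leakage of a channel P(Y|X) admits the well-known closed form L(X→Y) = log Σ_y max_{x: P_X(x)>0} P(y|x). The sketch below (illustrative only, not taken from the paper; the channel matrix and input distribution are hypothetical examples) computes it for a finite channel:

```python
import math

def maximal_leakage_bits(channel, p_x):
    """Maximal leakage L(X -> Y) in bits:
    log2 of the sum over outputs y of the largest transition
    probability P(y|x) among inputs x in the support of p_x."""
    support = [x for x, p in enumerate(p_x) if p > 0]
    num_outputs = len(channel[0])
    total = sum(max(channel[x][y] for x in support)
                for y in range(num_outputs))
    return math.log2(total)

# Hypothetical binary symmetric channel with crossover probability 0.1:
bsc = [[0.9, 0.1],
       [0.1, 0.9]]
print(maximal_leakage_bits(bsc, [0.5, 0.5]))  # log2(1.8), about 0.848 bits
```

Note that the result depends on the input distribution only through its support, which is one reason maximal leakage serves as a robust, adversary-agnostic upper bound.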
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1349-1375 |
| Number of pages | 27 |
| Journal | IEEE Transactions on Information Theory |
| Volume | 70 |
| Issue number | 2 |
| State | Published - Feb 1 2024 |
Keywords
- Privacy leakage
- Rényi divergence
- Sibson mutual information
- gain function
- maximal leakage
- multiple guesses
ASJC Scopus subject areas
- Information Systems
- Library and Information Sciences
- Computer Science Applications