Let n \in \mathbb{N}, \varepsilon \geq 0, and \delta \in [0, 1] (typically \delta = o(1/n)). Let \mathcal{Q}= \{f: \mathcal{X}^n \rightarrow \mathcal{Y}\} denote a family of functions (``queries''). Further, assume that there is a probability measure on \mathcal{Y}. A mechanism \mathsf{M}: \mathcal{X}^n \times \mathcal{Q}\rightarrow \Delta(\mathcal{Y}) is said to satisfy (\varepsilon, \delta)-differential privacy if for all neighbouring datasets X and X^\prime, every query f \in \mathcal{Q}, and every measurable T \subseteq \mathcal{Y}, we have
\begin{align} \Pr_{Y \textcolor{Salmon}{\xleftarrow{\char"1F4B0}}\mathsf{M}(X, f)}{[ Y \in T]} &\leq e^{\varepsilon}\Pr_{Y' \textcolor{Salmon}{\xleftarrow{\char"1F4B0}}\mathsf{M}(X', f)}{[Y' \in T]} + \delta \end{align}
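To make the definition concrete, here is a minimal sketch (in Python, not from any particular paper) of the classical Laplace mechanism for a counting query, which satisfies (\varepsilon, 0)-DP; the names counting_query and laplace_mechanism are purely illustrative.

```python
import numpy as np

def counting_query(X):
    """f(X): the number of records satisfying a predicate (here: x > 0)."""
    return sum(1 for x in X if x > 0)

def laplace_mechanism(X, f, epsilon, sensitivity=1.0):
    """M(X, f): release f(X) + Lap(sensitivity / epsilon).

    For neighbouring X, X' (differing in one record), |f(X) - f(X')| <= 1,
    so the output densities differ by a factor of at most e^epsilon,
    i.e. the mechanism is (epsilon, 0)-DP.
    """
    return f(X) + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Neighbouring datasets: they differ in exactly one record.
X  = [1, 1, 0, 1, 0]
Xp = [1, 1, 0, 1, 1]
print(laplace_mechanism(X, counting_query, epsilon=0.5))
```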
Fix \kappa\in \mathbb{N} and n= \mathtt{poly}(\kappa). Let \varepsilon \geq 0, let \delta be a negligible function in \kappa, and let \mathsf{M}= \{ \mathsf{M}_\kappa: \mathcal{X}^n_\kappa\times \mathcal{Q}\rightarrow \Delta(\mathcal{Y}_\kappa)\}_{\kappa\in \mathbb{N}} be a family of mechanisms. We say that \mathsf{M} is computationally (\varepsilon, \delta)-differentially private (IND-CDP) if for all but finitely many values of \kappa, for every non-uniform PPT Turing machine (``distinguisher'') D, for every query f\in \mathcal{Q}, and for all neighbouring datasets X, X^\prime \in \mathcal{X}^n_\kappa, we have
\begin{equation*} \Pr\left[ D(Y) = 1\right] \leq e^{\varepsilon}\,\Pr\left[D(Y') = 1\right] + \delta(\kappa) \end{equation*} where Y \textcolor{Salmon}{\xleftarrow{\char"1F4B0}}\mathsf{M}_\kappa(X, f) and Y' \textcolor{Salmon}{\xleftarrow{\char"1F4B0}}\mathsf{M}_\kappa(X', f).
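The computational definition is best read as an indistinguishability game. Continuing the Laplace sketch above (again purely illustrative; the Laplace mechanism is of course also statistically DP), the following snippet estimates a fixed distinguisher's acceptance probabilities on neighbouring datasets and checks the (\varepsilon, \delta) inequality empirically.

```python
import numpy as np

def distinguisher(y, threshold=4.0):
    """A (certainly PPT) distinguisher: output 1 iff the release exceeds a threshold."""
    return 1 if y > threshold else 0

def acceptance_prob(dataset, epsilon, trials=100_000):
    """Monte-Carlo estimate of Pr[D(M(X, f)) = 1], reusing laplace_mechanism above."""
    hits = sum(distinguisher(laplace_mechanism(dataset, counting_query, epsilon))
               for _ in range(trials))
    return hits / trials

eps = 0.5
p, p_prime = acceptance_prob(X, eps), acceptance_prob(Xp, eps)
print(p <= np.exp(eps) * p_prime + 0.01)  # the DP inequality, with slack for sampling error
```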
Differential Privacy (DP) is often presented as a strong privacy-enhancing technology with broad applicability, and advocated as a de facto standard for releasing aggregate statistics on sensitive data. By definition, DP prevents even a computationally unbounded adversary from learning too much about the input of a computation from its output. The privacy community has long wondered whether restricting the adversary to be computationally bounded could yield DP protocols with additional utility. For example, in cryptography, by assuming the adversary is computationally bounded, we have been able to construct protocols for digital signatures (NIST, 1992), private-key encryption (Katz and Yung, 2000), commitment schemes (Impagliazzo and Luby, 1989), etc., which have had an immense practical impact.
Open Problem 10.6 from (Vadhan, 2017): Is there a computational task solvable by a single curator with computational differential privacy that is impossible to achieve with information-theoretic differential privacy?
Over the last two decades, this question has received considerable attention. Before describing prior efforts to resolve this question, we first define what it means to be a computational task.
What Is a Computational Task?
A computational ``task'' was formally defined as a computable function u: \mathcal{X}^n \times \mathcal{Y}\rightarrow \{ 0, 1\} that takes as input a dataset X \in \mathcal{X}^n and an output y sampled from \mathsf{M}(X) for some DP mechanism \mathsf{M}. The output y is considered ``useful'' if and only if u(X, y)=1. The DP mechanism \mathsf{M} is said to be \alpha-useful if
\Pr_{y\textcolor{Salmon}{\xleftarrow{\char"1F4B0}}\mathsf{M}(X)}[ u(X, y)=1] \geq \alpha for some \alpha\in (0,1).
It is essential that u be efficiently computable.
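As a concrete (and purely illustrative) instance, take u(X, y) = 1 iff the release y is within some tolerance \tau of the true counting-query answer. The sketch below, reusing the Laplace mechanism from earlier, empirically estimates the \alpha for which that mechanism is \alpha-useful.

```python
def utility(X, y, tolerance=3.0):
    """u(X, y) = 1 iff y is within `tolerance` of f(X).

    An illustrative choice of u, and clearly efficiently computable.
    """
    return 1 if abs(y - counting_query(X)) <= tolerance else 0

def estimate_usefulness(X, epsilon, trials=100_000):
    """Estimate alpha = Pr_{y <- M(X)}[u(X, y) = 1]."""
    wins = sum(utility(X, laplace_mechanism(X, counting_query, epsilon))
               for _ in range(trials))
    return wins / trials

print(estimate_usefulness([1, 1, 0, 1, 0], epsilon=0.5))
```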
If there exists a fully black-box construction \{ f_k\}_{k \in \mathbb{N}} of an \varepsilon-IND-CDP mechanism from a one-way function \mathcal{O}, then there exists an (\varepsilon, \mathsf{negl}\left({k}\right))-DP family \{f_k'\}_{k \in \mathbb{N}} that is roughly as efficient, and for any utility function u we have \begin{align*} \left| \underset{\textcolor{salmon}{y \textcolor{Salmon}{\xleftarrow{\char"1F4B0}}f_k^{\mathcal{O}}(X)}}{\mathbb{E}}\left[u(X, y)\right] - \underset{\textcolor{salmon}{y' \textcolor{Salmon}{\xleftarrow{\char"1F4B0}}f_k'(X)}}{\mathbb{E}}\left[u(X, y')\right] \right| \leq \mathsf{negl}\left({k}\right) \end{align*}
Fix some k \in \mathbb{N}. We have some IND-CDP mechanism f_k that uses a one-way function \mathcal{O} in a fully black-box manner to make its output CDP. In particular, by fully black-box (Page 3, Section 1.1, RTV04), we mean that f_k accesses \mathcal{O} only as an oracle, and the security reduction uses any adversary against f_k only as a black box.
Thus, the computational indistinguishability comes entirely from the hardness of inverting the one-way messages the oracle sends. For the construction f_k', the proof relies on the assumption that the privacy adversary does not have access to \mathcal{O}, and knows nothing about the implementation of \mathcal{O} either (this bit is critical!).
The key idea is that, as the construction is fully black-box, f_k' can simply sample a random permutation \pi: \{ 0, 1\}^k \rightarrow \{ 0, 1\}^k and then simulate f_k with \pi as \mathcal{O}. A randomly chosen permutation is one-way against any adversary that can only make boundedly many queries to it, and f_k does not care what kind of one-way function is used, only that its oracle is one-way. Thus, f_k^\mathcal{O} is easily simulated by f_k^{\pi}. Now the privacy adversary has no clue what \pi is (as \pi was selected uniformly at random), and it also cannot query \pi to check whether it successfully inverted \pi(x) for any x. Thus, with respect to inverting \pi, a computationally unbounded adversary has the same power as a computationally bounded adversary (who cannot query \mathcal{O}) has with respect to the one-way function \mathcal{O}. And as the indistinguishability came from the hardness of inverting the one-way messages, we get indistinguishability against a computationally unbounded adversary as well. A toy sketch of this simulation appears below.
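Here is that toy sketch in Python, under the (big) assumption that f_k exposes its oracle as a parameter; the permutation is explicitly tabulated, so this is only feasible for toy values of k, whereas the argument above merely needs that such a \pi exists.

```python
import random

def sample_random_permutation(k):
    """Sample a uniformly random permutation pi: {0,1}^k -> {0,1}^k (as a lookup table)."""
    domain = list(range(2 ** k))
    image = domain[:]
    random.shuffle(image)
    table = dict(zip(domain, image))
    return lambda x: table[x]

def f_k_prime(X, k, f_k):
    """f_k'(X): sample a fresh random permutation pi and run f_k with pi as its
    one-way oracle. Being fully black-box, f_k cannot tell pi apart from any
    other one-way oracle."""
    pi = sample_random_permutation(k)
    return f_k(X, oracle=pi)

# Toy stand-in for f_k (it ignores X): query the oracle on a random point.
def toy_f_k(X, oracle, k=8):
    return oracle(random.randrange(2 ** k))

print(f_k_prime([1, 0, 1], k=8, f_k=toy_f_k))
```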
Next, we show that for low-dimensional ranges, i.e., \mathcal{Y}\subseteq \mathbb{R}^d where d = O\left(\log{}k\right), we also cannot separate the two classes of DP algorithms. This was first shown by Groce, Katz, and Yerukhimovich, but Bun, Chen, and Vadhan have a simpler, more elegant proof of the claim. The intuition is as follows: say I have some IND-CDP mechanism M_k that outputs y, which is \alpha_k-accurate. Then the procedure of sampling a value uniformly at random from a ball of radius c_k = O\left(\alpha_k\right) centred at y is clearly O\left(\alpha_k\right)-accurate (by the triangle inequality). Statistical privacy follows from the fact that the doubling dimension of \mathbb{R}^d is \Theta\left(d\right). See the detailed proof of this claim below.
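Before the proof, here is a quick illustrative sketch of just the rounding step (sampling uniformly from a ball around the CDP mechanism's output), using the standard Gaussian-direction trick; the radius and the stand-in output below are placeholders.

```python
import numpy as np

def round_in_ball(y, radius):
    """Sample uniformly from the Euclidean ball of radius `radius` centred at y.

    A normalised Gaussian gives a uniform direction; scaling by radius * U^(1/d)
    for U ~ Unif[0, 1] gives the correct radial distribution, so the output is
    uniform on the ball.
    """
    d = len(y)
    direction = np.random.normal(size=d)
    direction /= np.linalg.norm(direction)
    return y + radius * np.random.uniform() ** (1.0 / d) * direction

# If y = M_k(X) is alpha_k-accurate, releasing round_in_ball(y, c_k) with
# c_k = O(alpha_k) is O(alpha_k)-accurate by the triangle inequality.
y_cdp = np.array([0.3, -1.2])  # stand-in for an IND-CDP mechanism's output
print(round_in_ball(y_cdp, radius=0.5))
```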
Although we do not have a task that is impossible with SDP, Bun, Chen, and Vadhan show a task for which there is no efficient SDP solution; that is, no algorithm that runs in \mathtt{poly}(k) time is both SDP and produces output that is useful for the task u with high probability. However, there is a CDP algorithm that is both efficient and useful.
Although this does not answer the question, the techniques used lay the conceptual groundwork towards finding a separation.