We consider a successive refinement source coding model in which each receiver observes its own side information causally and is required to form its reconstruction in a causal manner. Furthermore, a one-way conference link of given capacity allows Decoder 1 (the helper) to send causal descriptions of what it has received so far to Decoder 2. A complete characterization of the rate-distortion region is provided. The optimal helper is a scalar quantizer of its side information that depends on the corresponding first-stage message symbol sent by the encoder. Thus, imposing the causality constraint on both the side information and the helper's conference renders the problem tractable and simplifies the optimal code.
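The structure of the optimal helper described above (a scalar quantizer of the side information selected by the first-stage message) can be sketched in a toy form. This is our own illustration, not the paper's construction; the threshold values and function names are hypothetical:

```python
# Toy sketch (hypothetical, for illustration only): the helper quantizes its
# side information y1 with a scalar quantizer whose thresholds are selected
# by the first-stage message symbol m received from the encoder.
quantizers = {
    0: [-1.0, 1.0],   # assumed thresholds used when m = 0
    1: [0.0],         # assumed thresholds used when m = 1
}

def helper_output(m, y1):
    """Index of the quantization cell containing y1, under the quantizer chosen by m."""
    thresholds = quantizers[m]
    return sum(y1 > t for t in thresholds)

assert helper_output(0, -2.0) == 0   # below both thresholds
assert helper_output(0, 0.5) == 1    # between the two thresholds
assert helper_output(1, 0.5) == 1    # above the single threshold
```

The causal description sent over the conference link at each time is then just this cell index, so the helper's rate is bounded by the log of the number of cells.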
In this paper, we present new results on the achievable rate-distortion regions in networked scalable compression problems, based on a flexible codebook-generation and binning method. First, we consider the problem of scalable coding in the presence of decoder side information, for which prior work analyzed two important cases of degraded side information, where the source $X$ and the side-information variables ($Y_1, Y_2$) form a Markov chain in the order $X-Y_1-Y_2$ or $X-Y_2-Y_1$. We present an example with non-Markov side information in which the proposed coding strategy achieves a strictly larger rate-distortion region than prior work. We then consider the problem of multi-user successive refinement, where users connected to a central server via noiseless links of different capacities strive to reconstruct the source in a progressive fashion. We show that a previously derived rate-distortion region is suboptimal in general, despite its optimality for a Gaussian source under MSE distortion, and that the proposed coding scheme achieves points beyond that region.
In a seminal work, Körner and Marton showed that for computing the modulo-two sum of doubly symmetric binary sources, linear codes achieve the optimal rates and outperform random-coding and binning-based arguments. Körner also showed the optimality of Slepian-Wolf-based random coding for the same problem for a different class of pairwise distributions. We show that linear codes yield the optimal sum-rate for a larger class of binary distributions, thus extending the optimality results for this problem.
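The Körner-Marton idea can be illustrated with a toy sketch (our own construction, not taken from the paper): both encoders transmit only the syndrome of their sequence under the parity-check matrix of the $(7,4)$ Hamming code, and the decoder recovers the modulo-two sum $z = x \oplus y$ from the XOR of the two syndromes. The sketch assumes the sources disagree in at most one position per block, playing the role of the typical set:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column i (1-indexed)
# is the binary representation of i, with row 0 holding the LSB.
H = np.array([[(i >> k) & 1 for i in range(1, 8)] for k in range(3)])

def encode(x):
    """Each terminal sends only its 3-bit syndrome H x (mod 2), not x itself."""
    return H @ x % 2

def decode_sum(sx, sy):
    """Recover z = x XOR y from the XOR of the two syndromes, assuming z has
    Hamming weight at most 1 (the toy stand-in for typicality of z)."""
    s = (sx + sy) % 2
    z = np.zeros(7, dtype=int)
    idx = s[0] + 2 * s[1] + 4 * s[2]  # syndrome value = index of the flipped bit
    if idx:
        z[idx - 1] = 1
    return z

x = np.array([1, 0, 1, 1, 0, 0, 1])
y = x.copy(); y[4] ^= 1              # sources disagree in exactly one position
z = decode_sum(encode(x), encode(y))
assert (z == (x ^ y)).all()
```

Each encoder sends 3 bits instead of 7, yet the decoder computes the sum exactly; the key linearity step is that the two syndromes XOR to the syndrome of $x \oplus y$, which no separate random-binning scheme exploits.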
This paper studies the problem of discriminating between two multivariate Gaussian distributions in a distributed manner. Specifically, it characterizes, in a special case, the optimal type-II error exponent as a function of the available communication rate. As a side result, the paper also presents the optimal type-II error exponent for a slight generalization of the hypothesis testing against conditional independence problem, in which the marginal distributions under the two hypotheses may differ.