In this paper, we formulate and solve a two-stage Bayesian sequential change diagnosis problem. Unlike the one-stage sequential change diagnosis problem considered in existing work, our formulation allows sampling to continue after a change has been detected, so that the post-change distribution can be identified more accurately. The goal is to minimize a total cost that accounts for detection delay, the false alarm probability, and the mis-diagnosis probability. We first convert the two-stage sequential change diagnosis problem into an optimization problem with two ordered stopping times. Using tools from the theory of multiple optimal stopping, we obtain the optimal change detection and distribution identification rules.
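As a rough illustration of the two-stage idea (a generic sketch, not the paper's optimal rules), consider Gaussian observations with two candidate post-change distributions. Stage 1 runs parallel CUSUM statistics, one per candidate, until some statistic crosses a detection threshold; stage 2 keeps sampling and declares a candidate once its cumulative log-likelihood leads the runner-up by an identification margin. The thresholds and distributions below are illustrative assumptions.

```python
def two_stage_diagnosis(xs, llrs, h_detect=2.0, h_identify=4.0):
    """Stage 1: parallel CUSUMs (one per candidate post-change distribution)
    run until some statistic crosses h_detect. Stage 2: continue sampling,
    accumulate each candidate's log-likelihood, and declare the leader once
    its lead over the runner-up reaches h_identify.

    llrs[j](x) is the log-likelihood ratio log f_j(x)/f_0(x) for candidate j.
    """
    K = len(llrs)
    W = [0.0] * K   # stage-1 CUSUM statistics
    L = [0.0] * K   # stage-2 cumulative log-likelihood ratios
    tau = None      # detection time
    for n, x in enumerate(xs, 1):
        if tau is None:
            for j in range(K):
                W[j] = max(0.0, W[j] + llrs[j](x))
            if max(W) >= h_detect:
                tau = n  # change detected; switch to identification stage
        else:
            for j in range(K):
                L[j] += llrs[j](x)
            ranked = sorted(range(K), key=lambda j: L[j], reverse=True)
            if L[ranked[0]] - L[ranked[1]] >= h_identify:
                return tau, n, ranked[0]  # detect time, decide time, label
    return tau, len(xs), None  # no identification within the sample budget
```

For example, with pre-change N(0,1) and candidates N(1,1) and N(-1,1), the candidate log-likelihood ratios are `x - 0.5` and `-x - 0.5`.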
In the sequential change-point detection problem for multi-stream data, it is assumed that there are M processes in a system and that, at some unknown time, an event changes the distribution of the observations from one unknown affected local process. In this paper, we consider this problem under a sampling control constraint, in which only one of the local processes can be observed at each time step. Our objective is to design an adaptive sampling policy together with a stopping rule that raises a correct alarm as quickly as possible subject to the false alarm and sampling control constraints. We develop an efficient sequential change-point detection algorithm under sampling control that is second-order asymptotically optimal even relative to the full-data scenario. That is, with a sampling rate of only 1/M of that in the full-data scenario, the proposed algorithm matches the performance of the optimal full-data procedure up to second order.
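One natural sampling-control scheme (an illustrative sketch, not necessarily the algorithm proposed here) maintains a CUSUM statistic per stream, mostly samples the stream with the largest statistic, and mixes in uniform exploration so that no stream is starved before its change appears. The Gaussian mean-shift model, exploration rate, and threshold below are assumptions for the example.

```python
import random

def llr(x, mu0=0.0, mu1=1.0):
    # Log-likelihood ratio of N(mu1, 1) against N(mu0, 1).
    return (mu1 - mu0) * x - 0.5 * (mu1 ** 2 - mu0 ** 2)

def detect_with_sampling_control(M=5, affected=2, change_time=50,
                                 threshold=8.0, eps=0.2, seed=1,
                                 max_steps=100_000):
    rng = random.Random(seed)
    W = [0.0] * M  # per-stream CUSUM statistics
    for t in range(1, max_steps + 1):
        # Sampling control: observe exactly one stream per step,
        # greedily by largest statistic with eps-uniform exploration.
        if rng.random() < eps:
            k = rng.randrange(M)
        else:
            k = max(range(M), key=lambda i: W[i])
        mean = 1.0 if (k == affected and t >= change_time) else 0.0
        x = rng.gauss(mean, 1.0)
        W[k] = max(0.0, W[k] + llr(x))
        if W[k] >= threshold:
            return t, k  # stopping time and the stream raising the alarm
    return max_steps, None
```

The exploration term is what keeps the policy from locking onto a pre-change stream whose statistic happens to fluctuate upward early.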
The problem of detecting and isolating a correlated pair among multiple Gaussian information sources is considered. It is assumed that there is at most one pair of correlated sources and that observations from all sources are acquired sequentially. The goal is to stop sampling as quickly as possible, declare upon stopping whether there is a correlated pair, and, if so, to identify it. Specifically, it is required to control explicitly the probabilities of three kinds of error: false alarm, missed detection, and wrong identification. We propose a procedure that not only controls these error metrics, but also achieves the smallest possible average sample size, to a first-order approximation, as the target error rates go to 0. Finally, a simulation study is presented in which the proposed rule is compared with an alternative sequential testing procedure that controls the same error metrics.
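A simple sequential scheme of this flavor (an illustrative sketch under a known correlation value, not the proposed procedure) tracks one cumulative pairwise log-likelihood ratio per pair of sources: declare the argmax pair once some statistic reaches an upper boundary, and declare "no correlated pair" once every statistic has fallen below a lower boundary. The correlation level and thresholds below are assumptions.

```python
import itertools
import math

def pair_llr(x, y, rho):
    # Log-LR of a standard bivariate normal with correlation rho
    # against two independent standard normals.
    r2 = rho * rho
    return (-0.5 * math.log(1.0 - r2)
            - (r2 * (x * x + y * y) - 2.0 * rho * x * y) / (2.0 * (1.0 - r2)))

def detect_correlated_pair(rows, M, rho, a=2.0, b=2.0):
    """rows yields one length-M observation vector per time step."""
    pairs = list(itertools.combinations(range(M), 2))
    S = {p: 0.0 for p in pairs}  # cumulative log-LR per candidate pair
    for n, row in enumerate(rows, 1):
        for (i, j) in pairs:
            S[(i, j)] += pair_llr(row[i], row[j], rho)
        best = max(pairs, key=lambda p: S[p])
        if S[best] >= a:
            return "pair", best, n       # identify the correlated pair
        if S[best] <= -b:
            return "no pair", None, n    # all pairs look independent
    return None, None, len(rows)
```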
Mixture-based algorithms are proposed for detecting a change in the distribution of a statistically periodic process in multislot and multistream settings. In the multislot change detection problem, the distribution of the observed process can change in any subset of time slots in each period. In the multistream change detection problem, there are parallel streams of observations, and the change can affect an arbitrary subset of streams. It is shown that the algorithms are asymptotically optimal in a Bayesian setting.
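To make the mixture idea concrete in the multistream setting, here is a minimal sketch (an assumption-laden illustration, not the paper's algorithm): keep one Shiryaev–Roberts statistic per candidate affected subset of streams, update each recursively, and raise an alarm when the prior-weighted mixture of the statistics crosses a threshold. A uniform prior over nonempty subsets is assumed, which is only practical for small numbers of streams.

```python
import itertools
import math

def mixture_sr(llr_rows, M, threshold):
    """llr_rows yields, per time step, a length-M vector of per-stream
    log-likelihood ratios (post-change vs pre-change)."""
    subsets = [S for r in range(1, M + 1)
               for S in itertools.combinations(range(M), r)]
    prior = 1.0 / len(subsets)       # uniform prior over nonempty subsets
    R = {S: 0.0 for S in subsets}    # per-subset Shiryaev-Roberts statistics
    for t, llrs in enumerate(llr_rows, 1):
        for S in subsets:
            # Recursive SR update: R_t = (1 + R_{t-1}) * LR_t(S), where
            # LR_t(S) multiplies the per-stream likelihood ratios over S.
            R[S] = (1.0 + R[S]) * math.exp(sum(llrs[i] for i in S))
        if prior * sum(R.values()) >= threshold:
            return t  # alarm time
    return None
```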
The problem of sequential binary hypothesis testing in an adversarial environment is investigated. Specifically, if there is no adversary, the samples are generated independently from a distribution $p$; if the adversary is present, the samples are generated independently from another distribution $q$. The adversary picks a distribution $q\in\mathcal Q$ with cost $c(q)$. The goal of the defender is to decide whether an adversary is present using as few samples as possible, while the goal of the adversary is to fool the defender. The problem is formulated as a non-zero-sum game between the adversary and the defender. A pair of strategies (an attack strategy for the adversary and a sequential hypothesis testing scheme for the defender) is proposed and shown to form an asymptotic Nash equilibrium of the non-zero-sum game. Numerical experiments are provided to validate our results.
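The defender's side of such a game is typically built on a sequential test in the spirit of Wald's SPRT; a minimal sketch (with Wald's approximate thresholds, not the equilibrium strategy of this paper) is:

```python
import math

def sprt(xs, llr, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: x ~ p versus H1: x ~ q.
    llr(x) is log(q(x)/p(x)); thresholds use Wald's approximations for
    target false-alarm probability alpha and miss probability beta."""
    upper = math.log((1.0 - beta) / alpha)   # decide H1 at or above this
    lower = math.log(beta / (1.0 - alpha))   # decide H0 at or below this
    s = 0.0
    for n, x in enumerate(xs, 1):
        s += llr(x)
        if s >= upper:
            return "H1", n   # decide the adversary is present
        if s <= lower:
            return "H0", n   # decide no adversary
    return None, len(xs)     # no decision within the sample budget
```

For instance, with $p = N(0,1)$ and $q = N(1,1)$ the log-likelihood ratio is `x - 0.5`.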