Combined Feed
https://feeds.library.caltech.edu/people/Lorden-G/combined.rss
A Caltech Library Repository Feed (RSS 2.0, generated by python-feedgen)
Wed, 06 Dec 2023 14:52:31 +0000

Integrated risk of asymptotically Bayes sequential tests
https://resolver.caltech.edu/CaltechAUTHORS:20121019-160110013
Authors: Lorden, Gary
Year: 1967
DOI: 10.1214/aoms/1177698696
For general multiple-decision testing problems, and even two-decision problems involving more than two states of nature, how to construct sequential procedures which are optimal (e.g. minimax, Bayes, or even admissible) is an open question. In the absence of optimality results, many procedures have been proposed for problems in this category. Among these are the procedures studied in Wald and Sobel (1949), Donnelly (1957), Anderson (1960),
and Schwarz (1962), all of which are discussed in the introduction of the paper by Kiefer and Sacks (1963) along with investigations in sequential design of experiments (notably those of Chernoff (1959) and Albert (1961)) which can be regarded as considering, inter alia, the (non-design) sequential testing problem. The present investigation concerns certain procedures which are asymptotically Bayes as the cost per observation, c, approaches zero and are definable by a simple rule: continue sampling until the a posteriori risk of stopping is less than Qc (where Q is a fixed positive number), and choose a terminal decision having minimum a posteriori risk. This rule, with Q = 1, was first considered by Schwarz and was shown to be asymptotically Bayes, under mild assumptions, by Kiefer and Sacks (whose results easily extend to the case of arbitrary Q > 0). Given an a priori distribution, F, and cost per observation, c, we shall use δ_F(Qc) to denote the procedure defined by this rule and δ_F*(c) to denote a Bayes solution with respect to F and c. The result of Kiefer and Sacks, for Q = 1, states that r_c(F, δ_F(c)) ~ r_c(F, δ_F*(c)) as c → 0, where r_c(F, δ) is the integrated risk of δ when F is the a priori distribution and c is the cost per observation. The principal aim of the present work is to construct upper bounds (valid for all c > 0) on the difference r_c(F, δ_F(Qc)) - r_c(F, δ_F*(c)), so that one can determine values
of c (or the probabilities of error) small enough to insure that simple asymptotically optimum procedures are reasonably efficient.
https://authors.library.caltech.edu/records/tvvgs-j9965

On excess over the boundary
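The rule from the 1967 abstract above, "continue sampling until the a posteriori risk of stopping is less than Qc," can be sketched for the simplest case: two simple hypotheses about a Bernoulli parameter under 0-1 loss, where the posterior risk of stopping is just the smaller of the two posterior probabilities. All names and parameter values below are illustrative assumptions, not from the paper.

```python
import random

def schwarz_rule(p0, p1, prior0, c, Q=1.0, true_p=0.5, rng=None, max_n=10**6):
    """Test H0: p = p0 vs. H1: p = p1 from Bernoulli(true_p) data.
    Under 0-1 loss the a posteriori risk of stopping is the smaller
    posterior probability; sample until it drops below Q*c, then
    choose the hypothesis with the larger posterior."""
    rng = rng or random.Random(0)
    post0 = prior0          # posterior probability of H0
    n = 0
    while min(post0, 1.0 - post0) >= Q * c and n < max_n:
        x = 1 if rng.random() < true_p else 0
        n += 1
        l0 = p0 if x else 1.0 - p0     # likelihood of x under H0
        l1 = p1 if x else 1.0 - p1     # likelihood of x under H1
        num = post0 * l0
        post0 = num / (num + (1.0 - post0) * l1)   # Bayes update
    return (0 if post0 >= 0.5 else 1), n
```

With Q = 1 this is Schwarz's rule; varying Q trades sampling cost against terminal risk, and the paper bounds the resulting excess integrated risk relative to the exact Bayes procedure.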
https://resolver.caltech.edu/CaltechAUTHORS:LORams70
Authors: Lorden, Gary
Year: 1970
A random walk, {S_n}, n = 0, 1, 2, ..., having positive drift and starting at the origin, is stopped the first time S_n > t ≥ 0. The present paper studies the "excess," S_n - t, when the walk is stopped. The main result is an upper bound on the mean of the excess, uniform in t. Through Wald's equation, this gives an upper bound on the mean stopping time, as well as upper bounds on the average sample numbers of sequential probability ratio tests. The same elementary approach yields simple upper bounds on the moments and tail probabilities of residual and spent waiting times of renewal processes.
https://authors.library.caltech.edu/records/8eyxa-q6789

Procedures for reacting to a change in distribution
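The "excess over the boundary" in the 1970 abstract above is easy to probe by simulation. A minimal sketch, assuming Exp(1) increments, for which memorylessness makes the overshoot exactly Exp(1), so the mean excess is 1 for every threshold t:

```python
import random

def mean_excess(t, reps=20000, seed=1):
    """Monte Carlo estimate of E[S_N - t] for a random walk with
    Exp(1) increments, stopped the first time S_n > t."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        s = 0.0
        while s <= t:
            s += rng.expovariate(1.0)
        total += s - t          # the excess over the boundary
    return total / reps
```

The paper's main result is a bound on this mean that is uniform in t; the exponential case makes the uniformity visible, since the estimates agree for every threshold.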
https://resolver.caltech.edu/CaltechAUTHORS:LORams71
Authors: Lorden, G.
Year: 1971
A problem of optimal stopping is formulated and simple rules are proposed which are asymptotically optimal in an appropriate sense. The problem is of central importance in quality control and also has applications in reliability theory and other areas.
https://authors.library.caltech.edu/records/n9ryc-0sr08

Likelihood ratio tests for sequential k-decision problems
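This 1971 paper is best known for establishing the asymptotic optimality of Page's CUSUM procedure for detecting a change in distribution. A minimal sketch of CUSUM for a shift in a unit-variance normal mean (all parameter values illustrative):

```python
import random

def cusum_alarm(xs, mu0, mu1, threshold):
    """Page's CUSUM for a mean shift mu0 -> mu1 (unit variance):
    accumulate the log-likelihood ratio, clipped below at 0, and
    alarm the first time it exceeds the threshold."""
    w = 0.0
    for n, x in enumerate(xs, 1):
        w = max(0.0, w + (mu1 - mu0) * x - (mu1**2 - mu0**2) / 2)
        if w >= threshold:
            return n            # alarm time
    return None                 # no alarm in this sample
```

The clipping at zero is what makes the statistic computable recursively with a single register, a theme that recurs in the change-point papers below.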
https://resolver.caltech.edu/CaltechAUTHORS:LORams72
Authors: Lorden, Gary
Year: 1972
Sequential tests of separated hypotheses concerning the parameter θ of a Koopman-Darmois family are studied from the point of view of minimizing expected sample sizes pointwise in θ subject to error probability bounds. Sequential versions of the (generalized) likelihood ratio test are shown to exceed the minimum expected sample sizes by at most M log log α⁻¹ uniformly in θ, where α is the smallest error probability bound. The proof considers the likelihood ratio tests as ensembles of sequential probability ratio tests and compares them with alternative procedures by constructing alternative ensembles, applying a simple inequality of Wald and a new inequality of similar type. A heuristic approximation is given for the error probabilities of likelihood ratio tests, which provides an upper bound in the case of a normal mean.
https://authors.library.caltech.edu/records/1hmpc-2tm14

Open-ended tests for Koopman-Darmois families
https://resolver.caltech.edu/CaltechAUTHORS:LORas73
Authors: Lorden, Gary
Year: 1973
The generalized likelihood ratio is used to define a stopping rule for rejecting the null hypothesis θ = θ0 in favor of θ > θ0. Subject to a bound α on the probability of ever stopping in case θ = θ0, the expected sample sizes for θ > θ0 are minimized within a multiple of log log α⁻¹, the multiple depending on θ. A heuristic bound on the error probability of a likelihood ratio procedure is derived and verified in the case of a normal mean by consideration of a Wiener process. Useful lower bounds on the small-sample efficiency in the normal case are thereby obtained.
https://authors.library.caltech.edu/records/hjmqs-hx312

2-SPRT's and the modified Kiefer-Weiss problem of minimizing an expected sample size
https://resolver.caltech.edu/CaltechAUTHORS:LORas76
Authors: Lorden, Gary
Year: 1976
A simple combination of one-sided sequential probability ratio tests, called a 2-SPRT, is shown to approximately minimize the expected sample size at a given point θ0 among all tests with error probabilities controlled at two other points, θ1 and θ2. In the symmetric normal and binomial testing problems, this result applies directly to the Kiefer-Weiss problem of minimizing the maximum over θ of the expected sample size. Extensive computer calculations for the normal case indicate that 2-SPRT's have efficiencies greater than 99% regardless of the size of the error probabilities. Accurate approximations to the error probabilities and expected sample sizes of these tests are given.
https://authors.library.caltech.edu/records/wdgrf-wjt75

Nearly-optimal sequential tests for finitely many parameter values
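The 2-SPRT structure from the 1976 abstract above (two one-sided SPRTs against θ0 run in parallel) can be sketched for a unit-variance normal mean. The thresholds and parameter values below are illustrative assumptions, not the paper's optimized choices.

```python
import random

def two_sprt(theta0, theta1, theta2, a1, a2, true_theta, seed=2, max_n=10**6):
    """Sketch of a 2-SPRT for a unit-variance normal mean: run two
    one-sided SPRTs against theta0 in parallel and stop the first
    time either log-likelihood ratio crosses its boundary."""
    rng = random.Random(seed)
    llr1 = llr2 = 0.0   # log LR of theta1 (resp. theta2) vs. theta0
    for n in range(1, max_n + 1):
        x = rng.gauss(true_theta, 1.0)
        llr1 += (theta1 - theta0) * x - (theta1**2 - theta0**2) / 2
        llr2 += (theta2 - theta0) * x - (theta2**2 - theta0**2) / 2
        if llr1 >= a1:
            return 1, n   # decide theta1
        if llr2 >= a2:
            return 2, n   # decide theta2
    return 0, max_n       # no decision within max_n
```

Centering both likelihood ratios at θ0 is what targets the expected sample size at θ0 itself, the point at which the Kiefer-Weiss maximum occurs in the symmetric cases.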
https://resolver.caltech.edu/CaltechAUTHORS:LORas77
Authors: Lorden, Gary
Year: 1977
Combinations of one-sided sequential probability ratio tests (SPRT's) are shown to be "nearly optimal" for problems involving a finite number of possible underlying distributions. Subject to error probability constraints, expected sample sizes (or weighted averages of them) are minimized to within o(1) asymptotically. For sequential decision problems, simple explicit procedures are proposed which "do exactly what a Bayes solution would do" with probability approaching one as the cost per observation, c, goes to zero. Exact computations for a binomial testing problem show that efficiencies of about 97% are obtained in some "small-sample" cases.
https://authors.library.caltech.edu/records/1cgah-f1743

Asymptotic efficiency of three-stage hypothesis tests
https://resolver.caltech.edu/CaltechAUTHORS:LORas83
Authors: Lorden, Gary
Year: 1983
Multi-stage hypothesis tests are studied as competitors of sequential tests. A class of three-stage tests for the one-dimensional exponential family is shown to be asymptotically efficient, whereas two-stage tests are not. Moreover, in order to be asymptotically optimal, three-stage tests must mimic the behavior of sequential tests. Similar results are obtained for the problem of testing two simple hypotheses.
https://authors.library.caltech.edu/records/vw1dk-4j810

Node Synchronization for the Viterbi Decoder
https://resolver.caltech.edu/CaltechAUTHORS:LORieeetc84
Authors: Lorden, Gary; McEliece, Robert J.; Swanson, Laif
Year: 1984
DOI: 10.1109/TCOM.1984.1096098
Motivated by the needs of NASA's Voyager 2 mission, in this paper we describe an algorithm which detects and corrects losses of node synchronization in convolutionally encoded data. This algorithm, which would be implemented as a hardware device external to a Viterbi decoder, makes statistical decisions about node synch based on the hard-quantized undecoded data stream. We will show that in a worst-case Voyager environment, our method will detect and correct a true loss of synch (thought to be a very rare event) within several hundred bits; many of the resulting outages will be corrected by the outer Reed-Solomon code. At the same time, the mean time between false alarms is on the order of several years, independent of the signal-to-noise ratio.
https://authors.library.caltech.edu/records/jg43f-1dq58

A control problem arising in the sequential design of experiments
https://resolver.caltech.edu/CaltechAUTHORS:LALap86
Authors: Lalley, S. P.; Lorden, G.
Year: 1986
The Pele problem. Starting from an initial point x not in his playing field, a football player is to dribble onto the field. Due to irregularities in the surface on which the player is dribbling, and perhaps also to small inconsistencies in his kick, the movement of the ball has a "random" component; moreover, a kick with the left foot tends to have a somewhat different effect than a kick with the right foot. The player's objective is to move the ball onto the playing field with as few kicks as possible.
https://authors.library.caltech.edu/records/5f231-2cv46

Nonanticipating estimation applied to sequential analysis and changepoint detection
https://resolver.caltech.edu/CaltechAUTHORS:LORas05
Authors: Lorden, Gary; Pollak, Moshe
Year: 2005
DOI: 10.1214/009053605000000183
Suppose a process yields independent observations whose distributions belong to a family parameterized by θ ∈ Θ. When the process is in control, the observations are i.i.d. with a known parameter value θ0. When the process is out of control, the parameter changes. We apply an idea of Robbins and Siegmund [Proc. Sixth Berkeley Symp. Math. Statist. Probab. 4 (1972) 37-41] to construct a class of sequential tests and detection schemes whereby the unknown post-change parameters are estimated. This approach is especially useful in situations where the parametric space is intricate and mixture-type rules are operationally or conceptually difficult to formulate. We exemplify our approach by applying it to the problem of detecting a change in the shape parameter of a Gamma distribution, in both a univariate and a multivariate setting.
https://authors.library.caltech.edu/records/0n449-snx64

Sequential Change-Point Detection Procedures That are Nearly Optimal and Computationally Simple
https://resolver.caltech.edu/CaltechAUTHORS:20180810-154810845
Authors: Lorden, Gary; Pollak, Moshe
Year: 2008
DOI: 10.1080/07474940802446244
Sequential schemes for detecting a change in distribution often require that all of the observations be stored in memory. Lai (1995, Journal of the Royal Statistical Society, Series B 57: 613-658) proposed a class of detection schemes that enable one to retain a finite window of the most recent observations, yet promise first-order optimality. The asymptotics are such that the window size is asymptotically unbounded. We argue that what is of computational importance is not having a finite window of observations, but rather making do with a finite number of registers. We illustrate, in the context of detecting a change in the parameter of an exponential family, that one can eventually achieve even second-order asymptotic optimality using only three registers for storing information about the past. We propose a very simple procedure and show by simulation that it is highly efficient for typical applications.
https://authors.library.caltech.edu/records/w1t8w-cq817

Optimal and Fast Confidence Intervals for Hypergeometric Successes
https://resolver.caltech.edu/CaltechAUTHORS:20221110-430693700.13
Authors: Bartroff, Jay; Lorden, Gary; Wang, Lijia
Year: 2022
DOI: 10.1080/00031305.2022.2128421
We present an efficient method of calculating exact confidence intervals for the hypergeometric parameter representing the number of "successes," or "special items," in the population. The method inverts minimum-width acceptance intervals after shifting them to make their endpoints nondecreasing while preserving their level. The resulting set of confidence intervals achieves minimum possible average size, and even in comparison with confidence sets not required to be intervals it attains the minimum possible cardinality most of the time, and always within 1. The method compares favorably with existing methods not only in the size of the intervals but also in the time required to compute them. The available R package hyperMCI implements the proposed method.
https://authors.library.caltech.edu/records/381y5-r0n40
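For comparison with the minimum-width intervals described in the 2022 abstract above, here is a plain tail-inversion (Clopper-Pearson-style) exact interval for the hypergeometric success count, written with only the standard library. It is conservative and generally wider than the paper's optimal intervals; `hyper_ci` and its arguments are illustrative names, not the hyperMCI API.

```python
from math import comb

def hyper_cdf(k, N, K, n):
    """P(X <= k) for X ~ Hypergeometric(N, K, n): X successes in a
    sample of n drawn without replacement from a population of N
    containing K successes."""
    lo = max(0, n - (N - K))
    hi = min(n, K)
    denom = comb(N, n)
    return sum(comb(K, j) * comb(N - K, n - j)
               for j in range(lo, min(k, hi) + 1)) / denom

def hyper_ci(x, N, n, alpha=0.05):
    """Equal-tailed exact confidence interval for K after observing
    x successes: invert the two one-sided hypergeometric tests.
    Valid because X is stochastically increasing in K."""
    lower = min(K for K in range(0, N + 1)
                if 1.0 - hyper_cdf(x - 1, N, K, n) > alpha / 2)
    upper = max(K for K in range(0, N + 1)
                if hyper_cdf(x, N, K, n) > alpha / 2)
    return lower, upper
```

This brute-force inversion runs in O(N·n) time per interval; the paper's contribution is precisely to produce narrower (minimum-average-size) intervals, and to compute them quickly.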