Article records
https://feeds.library.caltech.edu/people/Abu-Mostafa-Y-S/article.rss
A Caltech Library Repository Feed
Specification: http://www.rssboard.org/rss-specification
Generator: python-feedgen
Language: en
Last updated: Tue, 16 Apr 2024 13:15:21 +0000

A Differentiation Test for Absolute Convergence
https://resolver.caltech.edu/CaltechAUTHORS:20190710-151636497
Authors: Yaser S. Abu-Mostafa
Year: 1984
DOI: 10.2307/2689682
In this note, we describe a new test which provides a necessary and sufficient condition for absolute convergence of infinite series. The test is based solely on differentiation and is very easy to apply. It also provides a pictorial illustration for absolute convergence and divergence.
https://authors.library.caltech.edu/records/zqbc3-95g71

Recognitive Aspects of Moment Invariants
https://resolver.caltech.edu/CaltechAUTHORS:20190702-142428282
Authors: Yaser S. Abu-Mostafa, Demetri Psaltis
Year: 1984
DOI: 10.1109/tpami.1984.4767594
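The complex moments this record introduces are defined as C_pq = ΣΣ (x+iy)^p (x−iy)^q f(x,y); rotating the image by θ multiplies C_pq by e^{i(p−q)θ}, so the magnitude |C_pq| is rotation-invariant. A minimal numerical sketch (grid size, test image, and moment orders are illustrative assumptions, not from the paper):

```python
import numpy as np

def complex_moment(img, p, q):
    """Complex moment C_pq = sum over pixels of (x+iy)^p (x-iy)^q f(x,y),
    with coordinates centered on the image so rotation acts about the center."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    j = np.arange(n)
    x = j - c                  # x increases along columns
    y = c - j[:, None]         # y increases upward along rows
    z = x[None, :] + 1j * y    # z = x + iy at each pixel
    return np.sum(z ** p * np.conj(z) ** q * img)

rng = np.random.default_rng(0)
img = rng.random((32, 32))

# A 90-degree rotation multiplies C_pq by exp(i(p-q)*pi/2), so |C_pq|
# is unchanged -- the rotation invariance the abstract refers to.
for p, q in [(2, 0), (2, 1), (3, 1)]:
    m1 = abs(complex_moment(img, p, q))
    m2 = abs(complex_moment(np.rot90(img), p, q))
    assert np.isclose(m1, m2)
```

Higher-order invariants and the normalization procedures discussed in these records build on the same C_pq quantities.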
Moment invariants are evaluated as a feature space for pattern recognition in terms of discrimination power and noise tolerance. The notion of complex moments is introduced as a simple and straightforward way to derive moment invariants. Through this relation, properties of complex moments are used to characterize moment invariants. Aspects of information loss, suppression, and redundancy encountered in moment invariants are investigated and significant results are derived. The behavior of moment invariants in the presence of additive noise is also described.
https://authors.library.caltech.edu/records/3hgbt-rz712

Image Normalization by Complex Moments
https://resolver.caltech.edu/CaltechAUTHORS:20190702-135608926
Authors: Yaser S. Abu-Mostafa, Demetri Psaltis
Year: 1985
DOI: 10.1109/tpami.1985.4767617
The role of moments in image normalization and invariant pattern recognition is addressed. The classical idea of the principal axes is analyzed and extended to a more general definition. The relationship between moment-based normalization, moment invariants, and circular harmonics is established. Invariance properties of moments, as opposed to their recognition properties, are identified using a new class of normalization procedures. The application of moment-based normalization in pattern recognition is demonstrated by experiment.
https://authors.library.caltech.edu/records/ggy16-dv650

Information capacity of the Hopfield model
https://resolver.caltech.edu/CaltechAUTHORS:ABUieeetit85
Authors: Yaser S. Abu-Mostafa, Jeannine-Marie St. Jacques
Year: 1985
DOI: 10.1109/TIT.1985.1057069
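The storage scheme analyzed in this record is the outer-product (Hebbian) rule; a minimal sketch of stable stored patterns (the network size and pattern choice are illustrative assumptions, not from the paper):

```python
import numpy as np

# Hebbian (outer-product) storage for a Hopfield network: W = sum_k x_k x_k^T
# with zero diagonal. A stored pattern x is stable when sign(W @ x) == x.
def hopfield_weights(patterns):
    W = sum(np.outer(x, x) for x in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W

# Two orthogonal +/-1 patterns on N = 8 neurons; the paper proves that no
# more than N arbitrary state vectors can be made stable.
x1 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
x2 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = hopfield_weights([x1, x2])

# For these orthogonal patterns W @ x1 = 6 * x1 and W @ x2 = 6 * x2,
# so both are fixed points of the sign-update dynamics.
assert np.array_equal(np.sign(W @ x1), x1)
assert np.array_equal(np.sign(W @ x2), x2)
```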
The information capacity of general forms of memory is formalized. The number of bits of information that can be stored in the Hopfield model of associative memory is estimated. It is found that the asymptotic information capacity of a Hopfield network of N neurons is of the order N^3 bits. The number of arbitrary state vectors that can be made stable in a Hopfield network of N neurons is proved to be bounded above by N.
https://authors.library.caltech.edu/records/qs9be-eqg96

The complexity of information extraction
https://resolver.caltech.edu/CaltechAUTHORS:ABUieeetit86
Authors: Yaser S. Abu-Mostafa
Year: 1986
How difficult are decision problems based on natural data, such as pattern recognition? To answer this question, decision problems are characterized by introducing four measures defined on a Boolean function f of N variables: the implementation cost C(f), the randomness R(f), the deterministic entropy H(f), and the complexity K(f). The highlights and main results are roughly as follows: 1) C(f) ≈ R(f) ≈ H(f) ≈ K(f), all measured in bits. 2) Decision problems based on natural data are partially random (in the Kolmogorov sense) and have low entropy with respect to their dimensionality, and the relations between the four measures translate to lower and upper bounds on the cost of solving these problems. 3) Allowing small errors in the implementation of f saves a lot in the low-entropy case but saves nothing in the high-entropy case. If f is partially structured, the implementation cost is reduced substantially.
https://authors.library.caltech.edu/records/2zsw2-m3b78

On the Time-Bandwidth Proof in VLSI Complexity
https://resolver.caltech.edu/CaltechAUTHORS:ABUieeetc87
Authors: Yaser S. Abu-Mostafa
Year: 1987
DOI: 10.1109/TC.1987.1676888
A subtle fallacy in the original proof [1] that the computation time T is lower-bounded by a factor inversely proportional to the minimum bisection width of a VLSI chip is pointed out. A corrected version of the proof using the idea of conditionally self-delimiting messages is given.
https://authors.library.caltech.edu/records/p2p4x-mbj36

Optical Neural Computers
https://resolver.caltech.edu/CaltechAUTHORS:20190710-142228887
Authors: Yaser S. Abu-Mostafa, Demetri Psaltis
Year: 1987
Can computers be built to solve problems, such as recognizing patterns, that entail memorizing all possible solutions? The key may be to arrange optical elements in the same way as neurons are arranged in the brain.
https://authors.library.caltech.edu/records/twbxx-4z132

The capacity of multilevel threshold functions
https://resolver.caltech.edu/CaltechAUTHORS:OLAieeetpami88
Authors: Sverrir Olafsson, Yaser S. Abu-Mostafa
Year: 1988
DOI: 10.1109/34.3890
Lower and upper bounds for the capacity of multilevel threshold elements are estimated, using two essentially different enumeration techniques. It is demonstrated that the exact number of multilevel threshold functions depends strongly on the relative topology of the input set. The results correct a previously published estimate and indicate that adding threshold levels enhances the capacity more than adding variables.
https://authors.library.caltech.edu/records/86mkv-00m71

The Vapnik-Chervonenkis Dimension: Information versus Complexity in Learning
https://resolver.caltech.edu/CaltechAUTHORS:ABUnc89
Authors: Yaser S. Abu-Mostafa
Year: 1989
DOI: 10.1162/neco.1989.1.3.312
When feasible, learning is a very attractive alternative to explicit programming. This is particularly true in areas where the problems do not lend themselves to systematic programming, such as pattern recognition in natural environments. The feasibility of learning an unknown function from examples depends on two questions:
1. Do the examples convey enough information to determine the function?
2. Is there a speedy way of constructing the function from the examples?
These questions contrast the roles of information and complexity in learning. While the two roles share some ground, they are conceptually and technically different. In the common language of learning, the information question is that of generalization and the complexity question is that of scaling. The work of Vapnik and Chervonenkis (1971) provides the key tools for dealing with the information issue. In this review, we develop the main ideas of this framework and discuss how complexity fits in.
https://authors.library.caltech.edu/records/bz3m3-qjx13

Information theory, complexity and neural networks
https://resolver.caltech.edu/CaltechAUTHORS:ABUieeecm89
Authors: Yaser S. Abu-Mostafa
Year: 1989
DOI: 10.1109/35.41397
Some of the main results in the mathematical evaluation of neural networks as information processing systems are discussed. The basic operation of feedback and feed-forward neural networks is described. Their memory capacity and computing power are considered. The concept of learning by example as it applies to neural networks is examined.
https://authors.library.caltech.edu/records/me6t9-pfb15

An analog feedback associative memory
https://resolver.caltech.edu/CaltechAUTHORS:ATIieeetnn93
Authors: Amir Atiya, Yaser S. Abu-Mostafa
Year: 1993
DOI: 10.1109/72.182701
A method for the storage of analog vectors, i.e., vectors whose components are real-valued, is developed for the Hopfield continuous-time network. An important requirement is that each memory vector has to be an asymptotically stable (i.e., attractive) equilibrium of the network. Some of the limitations imposed by the continuous Hopfield model on the set of vectors that can be stored are pointed out. These limitations can be relieved by choosing a network containing visible as well as hidden units. An architecture consisting of several hidden layers and a visible layer, connected in a circular fashion, is considered. It is proved that the two-layer case is guaranteed to store any number of given analog vectors provided their number does not exceed 1 + the number of neurons in the hidden layer. A learning algorithm that correctly adjusts the locations of the equilibria and guarantees their asymptotic stability is developed. Simulation results confirm the effectiveness of the approach.
https://authors.library.caltech.edu/records/e7r25-pdp23

Hints and the VC Dimension
https://resolver.caltech.edu/CaltechAUTHORS:ABUnc93
Authors: Yaser S. Abu-Mostafa
Year: 1993
DOI: 10.1162/neco.1993.5.2.278
Learning from hints is a generalization of learning from examples that allows for a variety of information about the unknown function to be used in the learning process. In this paper, we use the VC dimension, an established tool for analyzing learning from examples, to analyze learning from hints. In particular, we show how the VC dimension is affected by the introduction of a hint. We also derive a new quantity that defines a VC dimension for the hint itself. This quantity is used to estimate the number of examples needed to "absorb" the hint. We carry out the analysis for two types of hints, invariances and catalysts. We also describe how the same method can be applied to other types of hints.
https://authors.library.caltech.edu/records/ynexf-s1764

Hints
https://resolver.caltech.edu/CaltechAUTHORS:ABUnc95
Authors: Yaser S. Abu-Mostafa
Year: 1995
DOI: 10.1162/neco.1995.7.4.639
The systematic use of hints in the learning-from-examples paradigm is the subject of this review. Hints are the properties of the target function that are known to us independently of the training examples. The use of hints is tantamount to combining rules and data in learning, and is compatible with different learning models, optimization techniques, and regularization techniques. The hints are represented to the learning process by virtual examples, and the training examples of the target function are treated on equal footing with the rest of the hints. A balance is achieved between the information provided by the different hints through the choice of objective functions and learning schedules. The Adaptive Minimization algorithm achieves this balance by relating the performance on each hint to the overall performance. The application of hints in forecasting the very noisy foreign-exchange markets is illustrated. On the theoretical side, the information value of hints is contrasted to the complexity value and related to the VC dimension.
https://authors.library.caltech.edu/records/0b77f-pma41

Introduction to financial forecasting
https://resolver.caltech.edu/CaltechAUTHORS:20190628-091500494
Authors: Yaser S. Abu-Mostafa, Amir F. Atiya
Year: 1996
DOI: 10.1007/bf00126626
This paper provides a brief introduction to forecasting in financial markets with emphasis on commodity futures and foreign exchange. We describe the basic approaches to forecasting, and discuss the noisy nature of financial data. Using neural networks as a learning paradigm, we describe different techniques for choosing the inputs, outputs, and error function. We also describe the learning from hints technique that augments the standard learning from examples method. We demonstrate the use of hints in foreign-exchange trading of the U.S. Dollar versus the British Pound, the German Mark, the Japanese Yen, and the Swiss Franc, over a period of 32 months. The paper does not assume a background in financial markets.
https://authors.library.caltech.edu/records/yny48-z8064

Validation of volatility models
https://resolver.caltech.edu/CaltechAUTHORS:20170408-150548267
Authors: Malik Magdon-Ismail, Yaser S. Abu-Mostafa
Year: 1998
DOI: 10.1002/(SICI)1099-131X(1998090)17:5/6<349::AID-FOR701>3.0.CO;2-X
In forecasting a financial time series, the mean prediction can be validated by direct comparison with the value of the series. However, the volatility or variance can only be validated by indirect means such as the likelihood function. Systematic errors in volatility prediction have an 'economic value' since volatility is a tradable quantity (e.g. in options and other derivatives) in addition to being a risk measure. We analyse the fidelity of the likelihood function as a means of training (in sample) and validating (out of sample) a volatility model. We report several cases where the likelihood function leads to an erroneous model. We correct for this error by scaling the volatility prediction using a predetermined factor that depends on the number of data points.
https://authors.library.caltech.edu/records/9765t-xrn98

Financial markets: very noisy information processing
https://resolver.caltech.edu/CaltechAUTHORS:MAGprocieee98
Authors: Malik Magdon-Ismail, Alexander Nicholson, Yaser S. Abu-Mostafa
Year: 1998
DOI: 10.1109/5.726786
We report new results about the impact of noise on information processing with application to financial markets. These results quantify the tradeoff between the amount of data and the noise level in the data. They also provide estimates for the performance of a learning system in terms of the noise level. We use these results to derive a method for detecting the change in market volatility from period to period. We successfully apply these results to the four major foreign exchange (FX) markets. The results hold for linear as well as nonlinear learning models and algorithms and for different noise models.
https://authors.library.caltech.edu/records/ght0h-qtr28

No Free Lunch for Early Stopping
https://resolver.caltech.edu/CaltechAUTHORS:20111201-140641206
Authors: Zehra Çataltepe, Yaser S. Abu-Mostafa, Malik Magdon-Ismail
Year: 1999
DOI: 10.1162/089976699300016557
We show that with a uniform prior on models having the same training error, early stopping at some fixed training error above the training error minimum results in an increase in the expected generalization error.
https://authors.library.caltech.edu/records/dy91j-tnb64

Maximal codeword lengths in Huffman codes
https://resolver.caltech.edu/CaltechAUTHORS:20190710-133740050
Authors: Y. S. Abu-Mostafa, R. J. McEliece
Year: 2000
DOI: 10.1016/S0898-1221(00)00119-X
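The Fibonacci bound this record describes is easy to check numerically against an actual Huffman code; a minimal sketch (the example source distribution is an illustrative assumption, not from the paper):

```python
import heapq

def huffman_depths(probs):
    """Build a Huffman code via the standard greedy min-heap merge and
    return each symbol's codeword length (tie-breaking by insertion order)."""
    heap = [(p, i, {i: 0}) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    counter = len(probs)
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)
        p2, _, d2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

def fibonacci_bound(p):
    """Bound from the abstract: the unique K with 1/F_(K+3) < p <= 1/F_(K+2)
    bounds the longest codeword length for smallest source probability p."""
    fib = [1, 1]                       # F_1, F_2
    while 1.0 / fib[-1] >= p:
        fib.append(fib[-1] + fib[-2])
    # now fib[-1] = F_(K+3), so K = len(fib) - 3
    return len(fib) - 3

probs = [0.5, 0.25, 0.125, 0.125]
depths = huffman_depths(probs)
assert max(depths.values()) == 3                            # lengths 1, 2, 3, 3
assert max(depths.values()) <= fibonacci_bound(min(probs))  # K = 4 for p = 1/8
```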
In this paper, we consider the following question about Huffman coding, which is an important technique for compressing data from a discrete source. If p is the smallest source probability, how long, in terms of p, can the longest Huffman codeword be? We show that if p is in the range 0 < p ≤ 1/2, and if K is the unique index such that 1/F_(K+3) < p ≤ 1/F_(K+2), where F_K denotes the Kth Fibonacci number, then the longest Huffman codeword for a source whose least probability is p is at most K, and no better bound is possible. Asymptotically, this implies the surprising fact that for small values of p, a Huffman code's longest codeword can be as much as 44% larger than that of the corresponding Shannon code.
https://authors.library.caltech.edu/records/s8rx2-m1v33

Minimizing memory loss in learning a new environment
https://resolver.caltech.edu/CaltechAUTHORS:20190702-153115049
Authors: Khalid Al-Mashouq, Yaser Abu-Mostafa, Khaled Al-Ghoneim
Year: 2001
DOI: 10.1016/s0925-2312(01)00400-3
Humans and other living species can learn new concepts without losing the old ones. On the other hand, artificial neural networks tend to "forget" old concepts. In this paper, we present three methods to minimize the loss of the old information. These methods are analyzed and compared for the linear model. In particular, a method called network sampling is shown to be optimal under certain conditions on the sampled data distribution. We also show how to apply these methods to nonlinear models.
https://authors.library.caltech.edu/records/47ngp-06v03

Introduction to the special issue on neural networks in financial engineering
https://resolver.caltech.edu/CaltechAUTHORS:ABUieeetnn01a
Authors: Yaser S. Abu-Mostafa, Amir F. Atiya, Malik Magdon-Ismail, Halbert White
Year: 2001
DOI: 10.1109/TNN.2001.935079
There are several phases that an emerging field goes through before it reaches maturity, and computational finance is no exception. There is usually a trigger for the birth of the field. In our case, new techniques such as neural networks, significant progress in computing technology, and the need for results that rely on more realistic assumptions inspired new researchers to revisit the traditional problems of finance, problems that have often been tackled by introducing simplifying assumptions in the past. The result has been a wealth of new approaches to these time-honored problems, with significant improvements in many cases.
https://authors.library.caltech.edu/records/n558f-fsb09

Financial model calibration using consistency hints
https://resolver.caltech.edu/CaltechAUTHORS:ABUieeetnn01b
Authors: Yaser S. Abu-Mostafa
Year: 2001
DOI: 10.1109/72.935092
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to the Japanese Yen swaps market and the US Dollar yield market.
https://authors.library.caltech.edu/records/7hwe0-aqc33

On the Maximum Drawdown of a Brownian Motion
https://resolver.caltech.edu/CaltechAUTHORS:20190702-143758047
Authors: Malik Magdon-Ismail, Amir F. Atiya, Amrit Pratap, Yaser S. Abu-Mostafa
Year: 2004
DOI: 10.1239/jap/1077134674
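The statistic studied in this record, the largest drop from a running peak to a subsequent trough, is straightforward to compute on a sample path; a minimal sketch (the drift and volatility values in the simulation are illustrative assumptions, not from the paper):

```python
import numpy as np

def max_drawdown(path):
    """Largest peak-to-trough drop over the path:
    max over t of (running maximum up to t) - path[t]."""
    path = np.asarray(path, dtype=float)
    running_peak = np.maximum.accumulate(path)
    return float(np.max(running_peak - path))

# Deterministic check: running peaks 1,3,3,5,5,5 give drawdowns
# 0,0,1,0,4,1, so the maximum drawdown is 4.
assert max_drawdown([1, 3, 2, 5, 1, 4]) == 4.0

# The same statistic applied to a simulated Brownian motion with drift
# (Euler discretization on [0, T]).
rng = np.random.default_rng(1)
T, n, mu, sigma = 1.0, 10_000, 0.05, 0.2
dt = T / n
increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
path = np.concatenate([[0.0], np.cumsum(increments)])
assert max_drawdown(path) >= 0.0
```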
The maximum drawdown at time T of a random process on [0,T] can be defined informally as the largest drop from a peak to a trough. In this paper, we investigate the behaviour of this statistic for a Brownian motion with drift. In particular, we give an infinite series representation of its distribution and consider its expected value. When the drift is zero, we give an analytic expression for the expected value, and for nonzero drift, we give an infinite series representation. For all cases, we compute the limiting (T → ∞) behaviour, which can be logarithmic (for positive drift), square root (for zero drift) or linear (for negative drift).
https://authors.library.caltech.edu/records/nx99z-mnz54

Learning and Measuring Specialization in Collaborative Swarm Systems
https://resolver.caltech.edu/CaltechAUTHORS:20190702-144335632
Authors: Ling Li, Alcherio Martinoli, Yaser S. Abu-Mostafa
Year: 2004
DOI: 10.1177/105971230401200306
This paper addresses qualitative and quantitative diversity and specialization issues in the framework of self-organizing, distributed, artificial systems. Both diversity and specialization are obtained via distributed learning from initially homogeneous swarms. While measuring diversity essentially quantifies differences among the individuals, assessing the degree of specialization implies correlating the swarm's heterogeneity with its overall performance. Starting from the stick-pulling experiment in collective robotics, a task that requires the collaboration of two robots, we abstract and generalize in simulation the task constraints to k robots collaborating sequentially or in parallel. We investigate quantitatively the influence of task constraints and types of reinforcement signals on performance, diversity, and specialization in these collaborative experiments. Results show that, though diversity is not explicitly rewarded in our learning algorithm, even in scenarios without explicit communication among agents the swarm becomes specialized after learning. The degrees of both diversity and specialization are affected strongly by environmental conditions and task constraints. While the specialization measure reveals characteristics related to performance and learning in a clearer way than diversity does, the latter measure appears to be less sensitive to different noise conditions and learning parameters.
https://authors.library.caltech.edu/records/dbbv3-t8n54

Machines that Think for Themselves
https://resolver.caltech.edu/CaltechAUTHORS:20120627-141125402
Authors: Yaser S. Abu-Mostafa
Year: 2012
DOI: 10.1038/scientificamerican0712-78
New techniques for teaching computers how to learn are beating the experts.
https://authors.library.caltech.edu/records/ree7p-z3287

Mismatched Training and Test Distributions Can Outperform Matched Ones
https://resolver.caltech.edu/CaltechAUTHORS:20141218-110007751
Authors: Carlos R. González, Yaser S. Abu-Mostafa
Year: 2015
DOI: 10.1162/NECO_a_00697
In learning theory, the training and test sets are assumed to be drawn from the same probability distribution. This assumption is also followed in practical situations, where matching the training and test distributions is considered desirable. Contrary to conventional wisdom, we show that mismatched training and test distributions in supervised learning can in fact outperform matched distributions in terms of the bottom line, the out-of-sample performance, independent of the target function in question. This surprising result has theoretical and algorithmic ramifications that we discuss.
https://authors.library.caltech.edu/records/x442b-zwk70

The United States COVID-19 Forecast Hub dataset
https://resolver.caltech.edu/CaltechAUTHORS:20211105-194210514
Authors: Estee Y. Cramer (ORCID: 0000-0003-1373-3177), Yuxin Huang, Yijin Wang, Evan L. Ray, Matthew Cornell, Johannes Bracher (ORCID: 0000-0002-3777-1410), Andrea Brennen, Alvaro J. Castro Rivadeneira, Aaron Gerding, Katie House, Dasuni Jayawardena, Abdul Hannan Kanji, Ayush Khandelwal, Khoa Le, Vidhi Mody, Vrushti Mody, Jarad Niemi (ORCID: 0000-0002-5079-158X), Ariane Stark, Apurv Shah, Nutcha Wattanachit, Martha W. Zorn, Nicholas G. Reich (ORCID: 0000-0003-3503-9899), Yaser S. Abu-Mostafa, Rahil Bathwal, Nicholas A. Chang, Pavan Chitta, Anne Erickson, Sumit Goel (ORCID: 0000-0003-3266-9035), Jethin Gowda, Qixuan Jin, HyeongChan Jo, Juhyun Kim, Pranav Kulkarni (ORCID: 0000-0002-1461-0948), Samuel M. Lushtak, Ethan Mann, Max Popken, Connor Soohoo, Kushal Tirumala, Albert Tseng, Vignesh Varadarajan, Jagath Vytheeswaran (ORCID: 0000-0002-5250-7714), Christopher Wang, Akshay Yeluri (ORCID: 0000-0001-8654-1673), Dominic Yurk (ORCID: 0000-0002-2276-4189), Michael Zhang, Alexander Zlokapa (ORCID: 0000-0002-4153-8646)
Year: 2022
DOI: 10.1038/s41597-022-01517-w
PMCID: PMC8236414
Academic researchers, government agencies, industry groups, and individuals have produced forecasts at an unprecedented scale during the COVID-19 pandemic. To leverage these forecasts, the United States Centers for Disease Control and Prevention (CDC) partnered with an academic research lab at the University of Massachusetts Amherst to create the US COVID-19 Forecast Hub. Launched in April 2020, the Forecast Hub is a dataset with point and probabilistic forecasts of incident cases, incident hospitalizations, incident deaths, and cumulative deaths due to COVID-19 at the county, state, and national levels in the United States. Included forecasts represent a variety of modeling approaches, data sources, and assumptions regarding the spread of COVID-19. The goal of this dataset is to establish a standardized and comparable set of short-term forecasts from modeling teams. These data can be used to develop ensemble models, communicate forecasts to the public, create visualizations, compare models, and inform policies regarding COVID-19 mitigation. These open-source data are available via download from GitHub, through an online API, and through R packages.
https://authors.library.caltech.edu/records/ys28f-z5094