Monograph records
https://feeds.library.caltech.edu/people/Çataltepe-Zehra-Kök/monograph.rss
A Caltech Library Repository Feed
Last updated: Tue, 16 Apr 2024 14:39:05 +0000

The Scheduling Problem in Learning From Hints
https://resolver.caltech.edu/CaltechCSTR:1994.cs-tr-94-09
Authors: Zehra Çataltepe
Year: 1994
DOI: 10.7907/Z93T9F7B
Any information about the function to be learned is called a hint. Learning from hints is a generalization of learning from examples: hints are expressed through their examples and then taught to a learning-from-examples system. In general, using other hints in addition to the examples of the function improves the generalization performance. The scheduling problem in learning from hints is deciding which hint to teach at which time during training. Over- or under-emphasizing a hint may render it useless, making scheduling very important. Two types of schedules are discussed: fixed and adaptive. Adaptive minimization is a general adaptive schedule that uses an estimate of the generalization error in terms of the errors on hints. When such an estimate is available, it can also be minimized by descending on it directly, and it can be used to decide when to stop training. A method for finding an estimate that incorporates the errors on invariance hints, and simulation results for this estimate, are presented. Two computer programs that provide a learning-from-hints environment, and improvements on them, are discussed.
https://authors.library.caltech.edu/records/fw9qp-m2455

The Central Classifier Bound - A New Error Bound for the Classifier Chosen by Early Stopping
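The scheduling idea from "The Scheduling Problem in Learning From Hints" above can be illustrated with a toy sketch. Everything here is an assumption for illustration: the "model" is just a dict of per-hint error levels, a training step halves the error on whichever hint is taught, and the pick-the-largest-error rule is a simple stand-in for an adaptive schedule, not the paper's actual adaptive-minimization algorithm.

```python
def adaptive_schedule(hints, model, n_steps, train_step, error_on):
    """At each step, teach the hint with the largest current error estimate.
    This max-error criterion is an illustrative assumption, not the
    paper's exact scheduling rule."""
    for _ in range(n_steps):
        worst = max(hints, key=lambda h: error_on(model, h))
        train_step(model, worst)
    return model

# Toy stand-ins: the model is a dict of per-hint error levels, and one
# training step halves the error on the hint that was taught.
def train_step(model, hint):
    model[hint] *= 0.5

errors = adaptive_schedule(
    hints=["function-examples", "invariance-hint"],
    model={"function-examples": 1.0, "invariance-hint": 0.8},
    n_steps=10,
    train_step=train_step,
    error_on=lambda m, h: m[h],
)
```

Under this toy dynamic, the schedule splits attention between the two hints so that neither error is over- or under-emphasized: after ten steps each hint has been taught five times.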
https://resolver.caltech.edu/CaltechCSTR:1997.cs-tr-97-08
Authors: Eric Bax, Zehra Çataltepe, Joe Sill
Year: 2001
DOI: 10.7907/Z9RB72M1
Training with early stopping is the following process. Partition the in-sample data into training and validation sets. Begin with a random classifier g_0. Use an iterative method to decrease the error rate on the training data. Record the classifier at each iteration, producing a series of snapshots g_1, ..., g_M. Evaluate the error rate of each snapshot over the validation data. Deliver a minimum-validation-error classifier g* as the result of training.
https://authors.library.caltech.edu/records/zg5br-fxz14

No Free Lunch for Early Stopping
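The early-stopping process described in "The Central Classifier Bound" abstract above can be sketched in Python. This is a minimal illustration, assuming a 1-D linear classifier trained with perceptron-style updates; the choice of learner and the toy data are assumptions, not from the paper.

```python
import random

def error_rate(w, b, data):
    # Fraction of points misclassified by the linear rule sign(w*x + b).
    return sum((w * x + b >= 0) != (y == 1) for x, y in data) / len(data)

def train_with_early_stopping(train_set, val_set, n_iters=50, lr=0.1, seed=0):
    rng = random.Random(seed)
    w, b = rng.uniform(-1, 1), rng.uniform(-1, 1)  # random initial classifier g_0
    best_v, best_w, best_b = error_rate(w, b, val_set), w, b
    for _ in range(n_iters):
        # One pass of perceptron-style updates to decrease training error.
        for x, y in train_set:
            t = 1 if y == 1 else -1
            if t * (w * x + b) <= 0:
                w, b = w + lr * t * x, b + lr * t
        # Evaluate this snapshot g_i on the validation set; keep the best.
        v = error_rate(w, b, val_set)
        if v < best_v:
            best_v, best_w, best_b = v, w, b
    # Deliver the minimum-validation-error snapshot g* as the result.
    return best_v, best_w, best_b

# Toy separable data: label is 1 exactly when x > 0.
data = [(x / 10, 1 if x > 0 else 0) for x in range(-10, 11) if x != 0]
best_v, best_w, best_b = train_with_early_stopping(data[::2], data[1::2])
```

The returned classifier is the one the paper's central classifier bound analyzes: not the final iterate, but the snapshot chosen for its validation error.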
https://resolver.caltech.edu/CaltechCSTR:1998.cs-tr-98-02
Authors: Zehra Çataltepe, Yaser S. Abu-Mostafa, Malik Magdon-Ismail
Year: 2001
DOI: 10.7907/Z9B8565P
We show that, with a uniform prior on hypothesis functions having the same training error, early stopping at some fixed training error above the training error minimum results in an increase in the expected generalization error. We also show that regularization methods are equivalent to early stopping with a certain non-uniform prior on the early stopping solutions.
https://authors.library.caltech.edu/records/f2t2r-bk048