Combined Thesis Feed
https://feeds.library.caltech.edu/people/Çataltepe-Zehra-Kök/combined_thesis.rss
A Caltech Library Repository Feed
http://www.rssboard.org/rss-specification
python-feedgen
en
Wed, 07 Feb 2024 05:05:22 +0000

The Scheduling Problem in Learning from Hints
https://resolver.caltech.edu/CaltechTHESIS:03272012-100501462
Authors: Zehra Kök Çataltepe (Çataltepe-Zehra-Kök; ORCID 0000-0002-9742-5907; cataltepe@itu.edu.tr)
Year: 1994
DOI: 10.7907/3zvq-w228
<p>Any information about the function to be learned is called a hint. Learning from hints is a generalization of learning from examples. In this paradigm, hints are expressed by their examples and then taught to a learning-from-examples system. In general, using other hints in addition to the examples of the function improves the generalization performance.</p>
<p>The scheduling problem in learning from hints is deciding which hint to teach at which time during training. Over- or under-emphasizing a hint may render it useless, making scheduling very important. Two types of schedules, fixed and adaptive, are discussed.</p>
<p>Adaptive minimization is a general adaptive schedule that uses an estimate of the generalization error in terms of the errors on hints. When such an estimate is available, it can also be minimized by descending on it directly. An estimate may also be used to decide when to stop training.</p>
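One simple scheduling rule of this family can be sketched in a few lines: at each step, descend on whichever hint currently has the larger error. The sketch below is illustrative only; the target, the evenness-invariance hint, the quadratic model, and the largest-error rule are all assumptions for the example, not the thesis's exact adaptive-minimization schedule.

```python
import numpy as np

rng = np.random.default_rng(2)

# Model: f(x) = w0 + w1*x + w2*x^2.  The target is even, so an
# evenness hint f(x) = f(-x) should drive w1 toward zero.
X = rng.uniform(-1, 1, size=50)
y = np.cos(3 * X)                       # even target function
Xh = rng.uniform(-1, 1, size=50)        # inputs for the invariance hint

def features(x):
    return np.stack([np.ones_like(x), x, x ** 2], axis=1)

def errors(w):
    e0 = np.mean((features(X) @ w - y) ** 2)                    # examples
    e1 = np.mean((features(Xh) @ w - features(-Xh) @ w) ** 2)   # evenness
    return e0, e1

def grads(w):
    F, Fh, Fm = features(X), features(Xh), features(-Xh)
    g0 = 2 * F.T @ (F @ w - y) / len(X)
    g1 = 2 * (Fh - Fm).T @ ((Fh - Fm) @ w) / len(Xh)
    return g0, g1

# Adaptive schedule: descend on the hint with the larger current error.
w = np.zeros(3)
for _ in range(2000):
    e0, e1 = errors(w)
    g0, g1 = grads(w)
    w -= 0.05 * (g0 if e0 >= e1 else g1)

e0, e1 = errors(w)
```

Because the schedule reacts to the errors, neither hint can be over-emphasized for long: once the invariance error exceeds the error on the examples, training switches to the hint, and vice versa.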
<p>A method to find an estimate incorporating the errors on invariance hints, and simulation results on this estimate, are presented. Two computer programs that provide a learning-from-hints environment, and improvements on them, are discussed.</p>
https://thesis.library.caltech.edu/id/eprint/6874

Incorporating Input Information into Learning and Augmented Objective Functions
https://resolver.caltech.edu/CaltechETD:etd-10042005-104636
Authors: Zehra Kök Çataltepe (Çataltepe-Zehra-Kök; ORCID 0000-0002-9742-5907; cataltepe@itu.edu.tr)
Year: 1998
DOI: 10.7907/82JV-3D67
<p>In many applications, some form of input information, such as test inputs or extra inputs, is available. We incorporate input information into learning by an augmented error function, which is an estimator of the out-of-sample error. The augmented error consists of the training error plus an additional term scaled by the augmentation parameter. For general linear models, we analytically show that the augmented solution has smaller out-of-sample error than the least squares solution. For nonlinear models, we devise an algorithm to minimize the augmented error by gradient descent, determining the augmentation parameter using cross validation.</p>
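The structure of the augmented error — training error plus an additional term scaled by the augmentation parameter — can be sketched for a linear model as follows. The data, the fixed value of the augmentation parameter `lam`, and the particular form of the extra term (comparing mean squared outputs on the extra inputs and the training inputs) are assumptions made for illustration; the thesis derives its own estimator and chooses the parameter by cross validation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear data, plus extra unlabeled inputs (e.g. test inputs).
X = rng.normal(size=(30, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=30)
X_extra = rng.normal(size=(200, 3))

lam = 0.1  # augmentation parameter (fixed here; chosen by cross validation in the thesis)

def augmented_error(w):
    """Training error plus an additional term scaled by lam.

    The extra term compares the model's mean squared output on the
    extra inputs with that on the training inputs -- one illustrative
    way to fold input information in, not necessarily the thesis's form.
    """
    e_train = np.mean((X @ w - y) ** 2)
    extra = np.mean((X_extra @ w) ** 2) - np.mean((X @ w) ** 2)
    return e_train + lam * extra

def grad(w):
    g_train = 2 * X.T @ (X @ w - y) / len(y)
    g_extra = (2 * X_extra.T @ (X_extra @ w) / len(X_extra)
               - 2 * X.T @ (X @ w) / len(y))
    return g_train + lam * g_extra

# Minimize the augmented error by gradient descent.
w = np.zeros(3)
for _ in range(3000):
    w -= 0.02 * grad(w)
```

Setting `lam = 0` recovers plain least-squares training; sweeping `lam` over a grid and scoring each solution on held-out data is one straightforward way to realize the cross-validation step.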
<p>Augmented objective functions also arise when hints are incorporated into learning. We first show that estimating the test error using invariance hints, and early stopping on this estimator, results in better solutions than minimizing the training error alone. We also extend our algorithm for incorporating input information to the case of learning from hints.</p>
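The early-stopping mechanism described above can be sketched as: train on the examples alone, monitor an estimator of the test error, and keep the weights at the estimator's minimum. Here the estimator is simply training error plus the evenness-invariance error — an illustrative combination, as are the noisy target and the overparameterized polynomial model; the thesis derives a specific estimator from the invariance hints.

```python
import numpy as np

rng = np.random.default_rng(1)

# Even target with noisy training examples; a degree-5 polynomial
# model is flexible enough to overfit the 20 training points.
X = rng.uniform(-1, 1, size=20)
y = np.cos(3 * X) + 0.3 * rng.normal(size=20)
Xh = rng.uniform(-1, 1, size=200)        # inputs for the invariance hint

def features(x):
    return np.stack([x ** k for k in range(6)], axis=1)

F, Fh, Fm = features(X), features(Xh), features(-Xh)

w = np.zeros(6)
best_w, best_est, est_trace = w.copy(), np.inf, []
for step in range(3000):
    w -= 0.05 * 2 * F.T @ (F @ w - y) / len(X)   # descend on training error only
    e_train = np.mean((F @ w - y) ** 2)
    e_inv = np.mean((Fh @ w - Fm @ w) ** 2)      # evenness-invariance error
    est = e_train + e_inv                         # test-error estimator
    est_trace.append(est)
    if est < best_est:                            # early stopping: keep the
        best_est, best_w = est, w.copy()          # weights minimizing the estimator
```

As training fits the noise, the invariance error grows and the estimator turns upward; the weights recorded at its minimum are the early-stopped solution.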
<p>Input information or hints are additional information about the target function. When the only available information is the training set, all models with the same training error are equally likely to be the target. In that case, we show that early stopping of training at any training error level above the minimum cannot decrease the out-of-sample error. Our results are nonasymptotic for general linear models and the bin model, and asymptotic for nonlinear models. When additional information is available, early stopping can help.</p>
https://thesis.library.caltech.edu/id/eprint/3913