Book Section records
https://feeds.library.caltech.edu/people/Li-Ling/book_section.rss
A Caltech Library Repository Feed (RSS 2.0: http://www.rssboard.org/rss-specification; generator: python-feedgen; language: en; last build date: Mon, 15 Apr 2024 23:49:01 +0000)

Emergent Specialization in Swarm Systems
https://resolver.caltech.edu/CaltechAUTHORS:20190702-150156265
Authors: Ling Li (Li-Ling); Alcherio Martinoli (Martinoli-A); Yaser S. Abu-Mostafa (Abu-Mostafa-Y-S)
Year: 2002
DOI: 10.1007/3-540-45675-9_43
Abstract: Distributed learning is the learning process of multiple autonomous agents in a varying environment, where each agent has only partial information about the global task. In this paper, we investigate the influence of different reinforcement signals (local and global) and of team diversity (homogeneous and heterogeneous agents) on the learned solutions. We compare the learned solutions with those obtained by systematic search in a simple case study in which pairs of agents have to collaborate to solve the task without any explicit communication. The results show that policies which allow teammates to specialize find an adequate team diversity and, in general, achieve performance similar to or better than that of policies which force homogeneity. In this specific case study, however, the achieved team performance appears to be independent of whether the reinforcement signal is local or global.
https://authors.library.caltech.edu/records/mqcjd-3wx35

Improving Generalization by Data Categorization
https://resolver.caltech.edu/CaltechAUTHORS:20190702-142717858
Authors: Ling Li (Li-Ling); Amrit Pratap (Pratap-A); Hsuan-Tien Lin (Lin-Hsuan-Tien); Yaser S. Abu-Mostafa (Abu-Mostafa-Y-S)
Year: 2005
DOI: 10.1007/11564126_19
Abstract: In most learning algorithms, examples in the training set are treated equally. Some examples, however, carry more reliable or critical information about the target than others, and some may carry wrong information. According to their intrinsic margin, examples can be grouped into three categories: typical, critical, and noisy. We propose three methods, namely the selection cost, the SVM confidence margin, and the AdaBoost data weight, to automatically group training examples into these three categories. Experimental results on artificial datasets show that, although the three methods are quite different in nature, they give similar and reasonable categorizations. Results on real-world datasets further demonstrate that treating the three data categories differently during learning can improve generalization.
https://authors.library.caltech.edu/records/4q0t9-sqp86

Multiclass boosting with repartitioning
https://resolver.caltech.edu/CaltechAUTHORS:20161122-145403527
Authors: Ling Li (Li-Ling)
Year: 2006
DOI: 10.1145/1143844.1143916
Abstract: A multiclass classification problem can be reduced to a collection of binary problems with the aid of a coding matrix. The quality of the final solution, which is an ensemble of base classifiers learned on the binary problems, is affected by both the performance of the base learner and the error-correcting ability of the coding matrix. A coding matrix with strong error-correcting ability may not be optimal overall if the binary problems are too hard for the base learner; a trade-off between error-correcting ability and base-learner performance should therefore be sought. In this paper, we propose a new multiclass boosting algorithm that modifies the coding matrix according to the learning ability of the base learner. We show experimentally that our algorithm is very efficient at optimizing the multiclass margin cost, and that it outperforms existing multiclass algorithms such as AdaBoost.ECC and one-vs-one. The improvement is especially significant when the base learner is not very powerful.
https://authors.library.caltech.edu/records/1txpz-3sx13

Ordinal Regression by Extended Binary Classification
https://resolver.caltech.edu/CaltechAUTHORS:20160315-111243621
Authors: Ling Li (Li-Ling); Hsuan-Tien Lin (Lin-Hsuan-Tien)
Year: 2007
Abstract: We present a reduction framework from ordinal regression to binary classification based on extended examples. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranking rule from the binary classifier. A weighted 0/1 loss of the binary classifier then bounds the mislabeling cost of the ranking rule. Our framework makes it possible not only to design good ordinal regression algorithms based on well-tuned binary classification approaches, but also to derive new generalization bounds for ordinal regression from known bounds for binary classification. In addition, our framework unifies many existing ordinal regression algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms, which demonstrates the usefulness of our framework.
https://authors.library.caltech.edu/records/24w78-bw040
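The three-step reduction in the abstract above can be sketched in code. This is a minimal illustration, not the paper's implementation: it assumes K = 3 ordered ranks, 1-D toy inputs, uniform mislabeling costs, and a plain perceptron as the binary base learner; all function names and data are illustrative.

```python
# Sketch of the ordinal-to-binary reduction: (1) extract extended
# examples, (2) learn one binary classifier on them, (3) build a
# ranking rule from that classifier.

K = 3  # number of ordinal ranks (assumption for this toy example)

def extend(dataset):
    """Step 1: each (x, y) yields K-1 extended examples ((x, k), sign(y > k))."""
    ext = []
    for x, y in dataset:
        for k in range(1, K):
            ext.append(((x, k), 1 if y > k else -1))
    return ext

def train_perceptron(ext, max_epochs=5000):
    """Step 2: a single binary classifier on extended features (x, k, bias)."""
    w = [0.0, 0.0, 0.0]
    for _ in range(max_epochs):
        mistakes = 0
        for (x, k), label in ext:
            feats = (x, float(k), 1.0)
            score = sum(wi * fi for wi, fi in zip(w, feats))
            if label * score <= 0:      # misclassified: perceptron update
                w = [wi + label * fi for wi, fi in zip(w, feats)]
                mistakes += 1
        if mistakes == 0:               # converged on the extended set
            break
    return w

def rank(w, x):
    """Step 3: ranking rule r(x) = 1 + #{k : f(x, k) = +1}."""
    votes = 0
    for k in range(1, K):
        if w[0] * x + w[1] * k + w[2] > 0:
            votes += 1
    return 1 + votes

# Toy ordinal data: rank 1 for small x, 2 for medium, 3 for large.
data = [(0.1, 1), (0.2, 1), (0.3, 1),
        (0.4, 2), (0.5, 2), (0.6, 2),
        (0.7, 3), (0.8, 3), (0.9, 3)]
w = train_perceptron(extend(data))
predictions = [rank(w, x) for x, _ in data]
```

A correct classifier on the extended examples recovers the rank exactly: for a rank-2 point, f(x, 1) = +1 and f(x, 2) = -1, so r(x) = 1 + 1 = 2.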