CaltechAUTHORS: Article
https://feeds.library.caltech.edu/people/Mazumdar-Eric/article.rss
A Caltech Library Repository Feed
RSS 2.0 (http://www.rssboard.org/rss-specification), generated by python-feedgen, language: en
Last build: Tue, 13 Aug 2024 19:11:28 -0700

On Gradient-Based Learning in Continuous Games
https://resolver.caltech.edu/CaltechAUTHORS:20210907-200115513
Year: 2020
DOI: 10.1137/18m1231298
We introduce a general framework for competitive gradient-based learning that encompasses a wide breadth of multiagent learning algorithms, and analyze the limiting behavior of competitive gradient-based learning algorithms using dynamical systems theory. For both general-sum and potential games, we characterize a nonnegligible subset of the local Nash equilibria that will be avoided if each agent employs a gradient-based learning algorithm. We also shed light on the issue of convergence to non-Nash strategies in general- and zero-sum games, which may have no relevance to the underlying game, and arise solely due to the choice of algorithm. The existence and frequency of such strategies may explain some of the difficulties encountered when using gradient descent in zero-sum games as, e.g., in the training of generative adversarial networks. To reinforce the theoretical contributions, we provide empirical results that highlight the frequency of linear quadratic dynamic games (a benchmark for multiagent reinforcement learning) that admit global Nash equilibria that are almost surely avoided by policy gradient.

Inverse Risk-Sensitive Reinforcement Learning
https://resolver.caltech.edu/CaltechAUTHORS:20210903-222215724
Year: 2020
DOI: 10.1109/TAC.2019.2926674
This work addresses the problem of inverse reinforcement learning in Markov decision processes where the decision-making agent is risk-sensitive. In particular, a risk-sensitive reinforcement learning algorithm with convergence guarantees that makes use of coherent risk metrics and models of human decision-making which have their origins in behavioral psychology and economics is presented. The risk-sensitive reinforcement learning algorithm provides the theoretical underpinning for a gradient-based inverse reinforcement learning algorithm that seeks to minimize a loss function defined on the observed behavior. It is shown that the gradient of the loss function with respect to the model parameters is well defined and computable via a contraction map argument. Evaluation of the proposed technique is performed on a Grid World example, a canonical benchmark problem.

Convergence Analysis of Gradient-Based Learning in Continuous Games
https://resolver.caltech.edu/CaltechAUTHORS:20210907-195235166
Year: 2020
Considering a class of gradient-based multi-agent learning algorithms in non-cooperative settings, we provide convergence guarantees to a neighborhood of a stable Nash equilibrium. In particular, we consider continuous games where agents learn in 1) deterministic settings with oracle access to their gradient and 2) stochastic settings with an unbiased estimator of their gradient. We also study the effects of non-uniform learning rates, which cause a distortion of the vector field that can alter which equilibrium the agents converge to and the path they take. We support the analysis with numerical examples that provide insight into how one might synthesize games to achieve desired equilibria.

Langevin Monte Carlo for Contextual Bandits
https://resolver.caltech.edu/CaltechAUTHORS:20220714-212437915
Year: 2022
DOI: 10.48550/arXiv.2206.11254
We study the efficiency of Thompson sampling for contextual bandits. Existing Thompson sampling-based algorithms need to construct a Laplace approximation (i.e., a Gaussian distribution) of the posterior distribution, which is inefficient to sample in high-dimensional applications for general covariance matrices. Moreover, the Gaussian approximation may not be a good surrogate for the posterior distribution for general reward-generating functions. We propose an efficient posterior sampling algorithm, viz., Langevin Monte Carlo Thompson Sampling (LMC-TS), that uses Markov chain Monte Carlo (MCMC) methods to directly sample from the posterior distribution in contextual bandits. Our method is computationally efficient since it only needs to perform noisy gradient descent updates without constructing the Laplace approximation of the posterior distribution. We prove that the proposed algorithm achieves the same sublinear regret bound as the best Thompson sampling algorithms for a special case of contextual bandits, viz., linear contextual bandits. We conduct experiments on both synthetic data and real-world datasets on different contextual bandit models, which demonstrate that directly sampling from the posterior is both computationally efficient and competitive in performance.
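The core computational step described in this abstract — replacing the Laplace approximation with noisy gradient descent on the posterior — can be sketched for the linear contextual bandit special case. This is a minimal illustration, not the paper's implementation: the function names, step size, iteration count, and Gaussian/ridge prior below are all assumptions made for the example.

```python
import numpy as np

def lmc_posterior_sample(X, y, n_steps=300, step=5e-4, lam=1.0, rng=None):
    """Draw one approximate posterior sample for a linear reward model.

    Runs unadjusted Langevin dynamics (noisy gradient descent) on the
    negative log-posterior of ridge regression:
        U(theta) = 0.5 * ||X @ theta - y||^2 + 0.5 * lam * ||theta||^2
    Update: theta <- theta - step * grad U(theta) + sqrt(2 * step) * noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = X.shape[1]
    theta = np.zeros(d)
    for _ in range(n_steps):
        grad = X.T @ (X @ theta - y) + lam * theta
        theta = theta - step * grad + np.sqrt(2.0 * step) * rng.standard_normal(d)
    return theta

def choose_arm(theta, contexts):
    """Thompson-style decision: act greedily under the sampled parameter."""
    return int(np.argmax(contexts @ theta))
```

A round of the bandit loop would then draw `theta` from the current data `(X, y)` via `lmc_posterior_sample`, pick `choose_arm(theta, contexts)`, and append the observed context/reward pair to the data — no covariance matrix is ever formed or factored, which is the efficiency argument the abstract makes.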