Book Section records
https://feeds.library.caltech.edu/people/Gregoire-J-M/book_section.rss
A Caltech Library Repository Feed
Specification: http://www.rssboard.org/rss-specification
Generator: python-feedgen
Language: en
Last build date: Wed, 29 Nov 2023 17:09:07 +0000

Advanced and In Situ Analytical Methods for Solar Fuel Materials
https://resolver.caltech.edu/CaltechAUTHORS:20151027-135221593
Authors: Chan, Candace K.; Tüysüz, Harun; Braun, Artur; Ranjan, Chinmoy; La Mantia, Fabio; Miller, Benjamin K.; Zhang, Liuxian; Crozier, Peter A.; Haber, Joel A.; Gregoire, John M.; Park, Hyun S.; Batchellor, Adam S.; Trotochaud, Lena; Boettcher, Shannon W.
Year: 2015
DOI: 10.1007/128_2015_650
In situ and operando techniques can play important roles in the development of better performing photoelectrodes, photocatalysts, and electrocatalysts by helping to elucidate crucial intermediates and mechanistic steps. The development of high throughput screening methods has also accelerated the evaluation of relevant photoelectrochemical and electrochemical properties for new solar fuel materials. In this chapter, several in situ and high throughput characterization tools are discussed in detail along with their impact on our understanding of solar fuel materials.
https://authors.library.caltech.edu/records/v2jf9-y3526

High Throughput Combinatorial Experimentation + Informatics = Combinatorial Science
https://resolver.caltech.edu/CaltechAUTHORS:20170718-103549107
Authors: Suram, Santosh K.; Pesenson, Meyer Z.; Gregoire, John M.
Year: 2015
DOI: 10.1007/978-3-319-23871-5_14
Many present, emerging and future technologies rely upon the development of high-performance functional materials. For a given application, the performance of materials containing one or two elements from the periodic table has been evaluated using traditional techniques, and additional materials complexity is required to continue the development of advanced materials, for example through the incorporation of several elements into a single material. The combinatorial aspect of combining several elements yields vast composition spaces that can be effectively explored with high throughput techniques. State-of-the-art high throughput experiments produce data which are multivariate, high-dimensional, and span wide ranges of spatial and temporal scales. We present an example of such data in the area of water splitting electrocatalysis and describe recent progress in two areas of interpreting such vast, complex datasets. We discuss a genetic programming technique for automated identification of composition-property trends, which is important for understanding the data and crucial in identifying representative compositions for further investigation. By incorporating such an algorithm in a high throughput experimental pipeline, the automated down-selection of samples can empower a highly efficient tiered screening platform. We also discuss some fundamental mathematics of composition spaces, where compositional variables are non-Euclidean due to the constant-sum constraint. We describe the native simplex space spanned by composition variables and provide illustrative examples of statistics and interpolation within this space. Through further development of machine learning algorithms and their prudent implementation in the simplex space, the data informatics community will establish methods that derive the most knowledge from high throughput materials science data.
https://authors.library.caltech.edu/records/9nwsz-4tt75

Relaxation Methods for Constrained Matrix Factorization Problems: Solving the Phase Mapping Problem in Materials Discovery
https://resolver.caltech.edu/CaltechAUTHORS:20170711-130810289
Authors: Bai, Junwen; Bjorck, Johan; Xue, Yexiang; Suram, Santosh K.; Gregoire, John; Gomes, Carla
Year: 2017
DOI: 10.1007/978-3-319-59776-8_9
Matrix factorization is a robust and widely adopted technique in data science, in which a given matrix is decomposed as the product of low-rank matrices. We study a challenging constrained matrix factorization problem in materials discovery, the so-called phase mapping problem. We introduce a novel "lazy" Iterative Agile Factor Decomposition (IAFD) approach that relaxes and postpones non-convex constraint sets (the lazy constraints), iteratively enforcing them when violations are detected. IAFD interleaves multiplicative gradient-based updates with efficient modular algorithms that detect and repair constraint violations, while still ensuring fast run times. Experimental results show that IAFD is several orders of magnitude faster, and its solutions are in general considerably better, than previous approaches. IAFD solves a key problem in materials discovery while also paving the way towards tackling constrained matrix factorization problems in general, with broader implications for data science.
https://authors.library.caltech.edu/records/1zypa-kz644

High Throughput Experimentation for the Discovery of Water Splitting Materials
https://resolver.caltech.edu/CaltechAUTHORS:20180910-110703060
Authors: Gregoire, John M.; Boyd, David A.; Guevarra, Dan; Haber, Joel A.; Jones, Ryan; Kan, Kevin; Marcin, Martin; Newhouse, Paul F.; Shinde, Aniketa; Soedarmadji, Edwin; Suram, Santosh K.; Zhou, Lan
Year: 2018
DOI: 10.1039/9781788010313-00305
High throughput experimentation is a powerful approach for accelerating materials discovery, particularly when embedded within a larger research effort providing clear guidance as to technologically relevant device operating conditions and in which discovered materials can be rapidly validated, further investigated, and incorporated into devices. In this chapter we provide an overview of high throughput pipelines developed to discover solar fuels materials, with particular attention given to electrocatalysts and photoelectrocatalysts for the oxygen evolution reaction. The description of the pipelines details our philosophy that experiment throughput must be contingent on establishing high data quality, which is embodied by our strategic choices of synthesis, screening, characterization, and data management techniques. This account of high throughput discovery of solar fuels materials provides a template for designing high throughput pipelines for mission-driven science research.
https://authors.library.caltech.edu/records/6p7zm-zh564

An Efficient Relaxed Projection Method for Constrained Non-negative Matrix Factorization with Application to the Phase-Mapping Problem in Materials Science
https://resolver.caltech.edu/CaltechAUTHORS:20180607-155251991
Authors: Bai, Junwen; Ament, Sebastian; Perez, Guillaume; Gregoire, John; Gomes, Carla
Year: 2018
DOI: 10.1007/978-3-319-93031-2_4
In recent years, a number of methods for solving the constrained non-negative matrix factorization problem have been proposed. In this paper, we propose an efficient method for tackling the ever-increasing size of real-world problems. To this end, we introduce a general relaxation and several algorithms for enforcing constraints in a challenging application: the phase-mapping problem in materials science. Using experimental data, we show that the proposed method significantly outperforms previous methods in terms of ℓ_2-norm error and speed.
https://authors.library.caltech.edu/records/5h9we-pgc55

Imitation Refinement for X-ray Diffraction Signal Processing
https://resolver.caltech.edu/CaltechAUTHORS:20190425-082002655
Authors: Bai, Junwen; Lai, Zihang; Yang, Runzhe; Xue, Yexiang; Gregoire, John; Gomes, Carla
Year: 2019
DOI: 10.1109/ICASSP.2019.8683723
Many real-world tasks involve identifying signals from data satisfying background or prior knowledge. In domains like materials discovery, due to the flaws and biases in raw experimental data, the identification of X-ray diffraction (XRD) signals often requires significant (manual) expert work to find refined signals that are similar to the ideal theoretical ones. Automatically refining the raw XRD signals utilizing simulated theoretical data is thus desirable. We propose imitation refinement, a novel approach to refine imperfect input signals, guided by a pre-trained classifier incorporating prior knowledge from simulated theoretical data, such that the refined signals imitate the ideal ones. The classifier is trained on the ideal simulated data to classify signals and learns an embedding space where each class is represented by a prototype. The refiner learns to refine the imperfect signals with small modifications, such that their embeddings are closer to the corresponding prototypes. We show that the refiner can be trained in both supervised and unsupervised fashions. We further illustrate the effectiveness of the proposed approach both qualitatively and quantitatively in an X-ray diffraction signal refinement task in materials discovery.
https://authors.library.caltech.edu/records/c4n6n-ze316
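The prototype-guided refinement idea summarized in the last record can be sketched in a few lines of NumPy. This is an illustrative toy under invented assumptions, not the authors' implementation: a fixed random linear map stands in for the trained classifier's embedding, and the signal size, step count, and learning rate are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, n_embed = 32, 8
# Stand-in for a trained classifier's embedding: a fixed linear map.
W = rng.standard_normal((n_features, n_embed))

def embed(signal):
    """Map a 1-D signal into the embedding space."""
    return signal @ W

# A class "prototype": the embedding of an idealized (simulated) signal.
ideal_signal = rng.standard_normal(n_features)
prototype = embed(ideal_signal)

def refine(signal, steps=200, lr=1e-3):
    """Nudge the signal with small modifications so that its embedding
    moves toward the class prototype (the refiner's objective)."""
    x = signal.copy()
    for _ in range(steps):
        # Gradient of 0.5 * ||embed(x) - prototype||^2 with respect to x.
        grad = (embed(x) - prototype) @ W.T
        x -= lr * grad
    return x

# An "imperfect" measured signal: the ideal one corrupted by noise.
raw = ideal_signal + 0.5 * rng.standard_normal(n_features)
refined = refine(raw)

before = np.linalg.norm(embed(raw) - prototype)
after = np.linalg.norm(embed(refined) - prototype)
assert after < before  # the refined embedding is closer to the prototype
```

In the paper's setting the embedding is a learned deep classifier and the refiner is itself trained (supervised or unsupervised); here plain gradient descent on the embedding distance merely illustrates the "small modifications toward the prototype" objective.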