Book Section records
https://feeds.library.caltech.edu/people/Goodman-R-M/book_section.rss
A Caltech Library Repository Feed
Specification: http://www.rssboard.org/rss-specification
Generator: python-feedgen
Language: en
Last build date: Thu, 11 Apr 2024 23:36:24 +0000

An efficient asynchronous multiplier
https://resolver.caltech.edu/CaltechAUTHORS:20190314-130609515
Authors: Goodman, Rodney M.; McAuley, Anthony J.
Year: 1988
DOI: 10.1109/arrays.1988.18096
An efficient asynchronous serial-parallel multiplier architecture is presented. It offers significant advantages over conventional clocked versions, without some of the drawbacks normally associated with similar asynchronous techniques, such as excessive area. It is shown how a general asynchronous communication element can be designed, illustrated with a CMOS multiplier chip implementation. It is also shown how the multiplier could form the basis for a faster and more robust implementation of the Rivest-Shamir-Adleman (RSA) public-key cryptosystem.
https://authors.library.caltech.edu/records/cry3b-swp85

High-capacity exponential associative memories
https://resolver.caltech.edu/CaltechAUTHORS:20190314-130609692
Authors: Chiueh, Tzi-Dar; Goodman, Rodney M.
Year: 1988
DOI: 10.1109/icnn.1988.23843
A generalized associative memory model with potentially high capacity is presented. A memory of this kind, with M stored vectors of length N, can be implemented with M nonlinear neurons, N ordinary thresholding neurons, and 2MN binary synapses. It is shown that special cases of this model include the Hopfield and high-order correlation memories. A special case of the model, based on a neuron which can be implemented in the subthreshold region, is presented. The authors analyze the capacity of this exponential associative memory and show that it scales exponentially with N. In any practical realization, however, the dynamic range of the exponentiators is constrained. They show that the capacity for networks with fixed dynamic range exponential circuits is proportional to the dynamic range.
https://authors.library.caltech.edu/records/0vx7j-6s990

An Information Theoretic Approach to Rule-Based Connectionist Expert Systems
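As a rough illustration of the exponential weighting described in "High-capacity exponential associative memories" above, one recall step weights each stored ±1 pattern by a base raised to its correlation with the probe and thresholds the weighted sum. This is only a sketch; the base a = 2, pattern length, and function name are illustrative choices, not taken from the paper:

```python
def ecam_recall(patterns, probe, a=2.0):
    # Weight each stored ±1 pattern by a ** <pattern, probe> ...
    weights = [a ** sum(p * x for p, x in zip(pat, probe)) for pat in patterns]
    # ... then threshold the weighted sum of patterns, componentwise.
    total = [sum(w * pat[i] for w, pat in zip(weights, patterns))
             for i in range(len(probe))]
    return [1 if t >= 0 else -1 for t in total]

stored = [[1, 1, 1, 1, -1, -1, -1, -1],
          [1, -1, 1, -1, 1, -1, 1, -1]]
noisy = [-1, 1, 1, 1, -1, -1, -1, -1]   # first pattern with one bit flipped
assert ecam_recall(stored, noisy) == stored[0]
```

The exponentiation makes the closest stored pattern dominate the sum, which is the intuition behind the model's large capacity.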
https://resolver.caltech.edu/CaltechAUTHORS:20160107-155547718
Authors: Goodman, Rodney M.; Miller, John W.; Smyth, Padhraic
Year: 1989
We discuss in this paper architectures for executing probabilistic rule-bases in a parallel manner, using as a theoretical basis recently introduced information-theoretic models. We will begin by describing our (non-neural) learning algorithm and theory of quantitative rule modelling, followed by a discussion on the exact nature of two particular models. Finally we work through an example of our approach, going from database to rules to inference network, and compare the network's performance with the theoretical limits for specific problems.
https://authors.library.caltech.edu/records/tz4df-tg596

On-Chip ECC for Multi-Level Random Access Memories
https://resolver.caltech.edu/CaltechAUTHORS:20170711-174231167
Authors: Goodman, Rodney M.; Sayano, Masahiro
Year: 1989
DOI: 10.1109/ITW.1989.761433
In this talk we investigate a number of on-chip coding techniques for the protection of Random Access Memories which use multi-level as opposed to binary storage cells. The motivation for such RAM cells is of course the storage of several bits per cell as opposed to one bit per cell [1]. Since the typical number of levels which a multi-level RAM can handle is 16 (the cell being based on a standard DRAM cell which has varying amounts of voltage stored on it), there are four bits recorded into each cell [2]. The disadvantage of multi-level RAMs is that they are much more prone to errors, and so on-chip ECC is essential for reliable operation.

There are essentially three reasons for error control coding in multi-level RAMs: to correct soft errors, to correct hard errors, and to correct read errors. The sources of these errors are, respectively, alpha particle radiation, hardware faults, and data level ambiguities. On-chip error correction can be used to increase the mean life before failure for all three types of errors. Coding schemes can be both bitwise and cellwise.

Bitwise schemes include simple parity checks and SEC-DED codes, either by themselves or as product codes [3]. Data organization should allow for burst error correction, since alpha particles can wipe out all four bits in a single cell and, for dense memory chips, data in surrounding cells as well. This latter effect becomes more serious as feature sizes are scaled, and a single alpha particle hit affects many adjacent cells. Burst codes such as those in [4] can be used to correct these errors. Bitwise coding schemes are more efficient in correcting read errors, since they can correct single bit errors and allow the remaining error correction power to be used elsewhere. Read errors essentially affect one bit only, since the use of Gray codes for encoding the bits into the memory cells ensures that at most one bit is flipped with each successive change in level.

Cellwise schemes include Reed-Solomon codes, hexadecimal codes, and product codes. However, simple encoding and decoding algorithms are necessary, since excessive space taken by powerful but complex encoding/decoding circuits can be offset by having more parity cells and using simpler codes. These coding techniques are more useful for correcting hard and soft errors which affect the entire cell. They tend to be more complex, and they are not as efficient in correcting read errors as the bitwise codes.
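The Gray-code property mentioned above (successive levels differ in exactly one bit) is easy to check; this sketch uses the standard binary-reflected Gray code for a 16-level cell:

```python
def gray(n):
    # Binary-reflected Gray code of n.
    return n ^ (n >> 1)

# Successive stored levels differ in exactly one bit, so a one-level
# read ambiguity corrupts at most one of the four bits per cell.
for level in range(15):
    diff = gray(level) ^ gray(level + 1)
    assert diff != 0 and diff & (diff - 1) == 0   # exactly one bit set
```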
In the talk we will investigate the suitability and performance of various multi-level RAM coding schemes, such as row-column codes, burst codes, hexadecimal codes, Reed-Solomon codes, concatenated codes, and some new majority-logic decodable codes. In particular we investigate their tolerance to soft errors and to feature size scaling.
https://authors.library.caltech.edu/records/mr9cv-dsn56

An Information Theoretic Approach to Modeling Neural Network Expert Systems
https://resolver.caltech.edu/CaltechAUTHORS:20170711-165746284
Authors: Goodman, Rodney M.; Miller, John W.; Smyth, Padhraic
Year: 1989
DOI: 10.1109/ITW.1989.761436
In this paper we propose several novel techniques for mapping rule bases, such as are used in rule-based expert systems, onto neural network architectures. Our objective in doing this is to achieve a system capable of incremental learning and distributed probabilistic inference. Such a system would be capable of performing inference many orders of magnitude faster than current serial rule-based expert systems, and hence be capable of true real-time operation. In addition, the rule-based formalism gives the system an explicit knowledge representation, unlike current neural models. We propose an information-theoretic approach to this problem, which has two aspects: first, learning the model and, second, performing inference using this model. We will show a clear pathway to implementing an expert system starting from raw data, via a learned rule-based model, to a neural network that performs distributed inference.
https://authors.library.caltech.edu/records/t79vm-e8688

VLSI Implementation of a High-Capacity Neural Network Associative Memory
https://resolver.caltech.edu/CaltechAUTHORS:20160107-161736867
Authors: Chiueh, Tzi-Dar; Goodman, Rodney M.
Year: 1990
In this paper we describe the VLSI design and testing of a high capacity associative memory which we call the exponential correlation associative memory (ECAM). The prototype 3 µm CMOS programmable chip is capable of storing 32 memory patterns of 24 bits each. The high capacity of the ECAM is partly due to the use of special exponentiation neurons, which are implemented via sub-threshold MOS transistors in this design. The prototype chip is capable of performing one associative recall in 3 µs.
https://authors.library.caltech.edu/records/bebxb-cjh49

Image smoothing at video rates with analog VLSI
https://resolver.caltech.edu/CaltechAUTHORS:20190314-142000424
Authors: Moore, Andrew; Goodman, Rodney
Year: 1990
DOI: 10.1109/icsmc.1990.142241
Image smoothing is an important computational primitive in both artificial and biological vision systems. A resistive grid forms a suitable substrate for this operation in both types of systems. Previous artificial systems using this substrate form the image for smoothing either with on-chip photoreceptors in real time or with digitally driven input to an analog sample-and-hold system at rates far below the video frame rate. We have designed, fabricated, and successfully tested a subthreshold CMOS analog VLSI chip which, with a minimum of supporting circuitry, can smooth an image formed from a conventional video signal, at the video frame rate.
https://authors.library.caltech.edu/records/tz7q7-bze77

A VLSI Neural Network for Color Constancy
https://resolver.caltech.edu/CaltechAUTHORS:20160119-151416374
Authors: Moore, Andrew; Allman, John; Fox, Geoffrey; Goodman, Rodney
Year: 1991
A system for color correction has been designed, built, and tested successfully; the essential components are three custom chips built using sub-threshold analog CMOS VLSI. The system, based on Land's Retinex theory of color constancy, produces colors similar in many respects to those produced by the visual system. Resistive grids implemented in analog VLSI perform the smoothing operation central to the algorithm at video rates. With the electronic system, the strengths and weaknesses of the algorithm are explored.
https://authors.library.caltech.edu/records/chwv7-bnm57

Texture Classification Using Information Theory
https://resolver.caltech.edu/CaltechAUTHORS:20170619-165121573
Authors: Greenspan, Hayit K.; Goodman, Rodney M.
Year: 1991
DOI: 10.1109/ISIT.1991.695342
Visual texture is one of the most fundamental properties of a visible surface. It participates as one of the major modalities which help us in the understanding of our visual environment. The different textures in an image are usually very apparent to a human observer, but automatic description of these patterns has proved to be complex.
https://authors.library.caltech.edu/records/wg0xr-0ma08

Single Phased Burst Error Correcting Array Codes
https://resolver.caltech.edu/CaltechAUTHORS:20170620-150041378
Authors: Sayano, Masahiro; Goodman, Rodney M.; McEliece, Robert J.
Year: 1991
DOI: 10.1109/ISIT.1991.695251
Array codes composed of row and column parities with a diagonally cyclic readout order are capable of correcting a single burst error along one diagonal. A new equation which defines permissible array sizes is presented. These codes have an optimal size which is shown to be a number theoretic problem. In addition, correction of approximate errors is presented; this can be generalized for many classes of error correcting codes.
https://authors.library.caltech.edu/records/6esqa-h0n55

Objective Functions For Neural Network Classifier Design
https://resolver.caltech.edu/CaltechAUTHORS:20170620-163501947
Authors: Goodman, Rod; Miller, John W.; Smyth, Padhraic
Year: 1991
DOI: 10.1109/ISIT.1991.695143
Backpropagation was originally derived in the context of minimizing a mean-squared error (MSE) objective function. More recently there has been interest in objective functions that provide accurate class probability estimates. In this talk we derive necessary and sufficient conditions on the required form of an objective function to provide probability estimates. This leads to the definition of a general class of functions which includes MSE and cross entropy (CE) as two of the simplest cases. We establish the equivalence of these functions to Maximum Likelihood estimation and the more general principle of Minimum Description Length models. Empirical results are used to demonstrate the tradeoffs associated with the choice of objective functions whose minima yield probability estimates.
https://authors.library.caltech.edu/records/q4xgq-96c04

Locally Adaptive Vector Quantization For Image Compression
https://resolver.caltech.edu/CaltechAUTHORS:20170622-134209135
Authors: Goodman, Rodney M.; Sayano, Masahiro; Cheung, Kar-Ming; Pollara, Fabrizio
Year: 1991
DOI: 10.1109/ISIT.1991.695306
In this paper we study various improvements to a locally adaptive vector quantization (LAVQ) algorithm. The effects of including bit stripping, index compression, and filtering techniques will be discussed. Software implementation and comparisons with non-adaptive vector quantization algorithms will be studied.
https://authors.library.caltech.edu/records/wewad-kt087

Incremental Rule-based Learning
https://resolver.caltech.edu/CaltechAUTHORS:20190314-142001533
Authors: Higgins, Charles M.; Goodman, Rodney M.
Year: 1991
DOI: 10.1109/isit.1991.695344
In a system which learns to predict the value of an output variable given one or more input variables by looking at a set of examples, a rule-based knowledge representation provides not only a natural method of constructing a classifier, but also a human-readable explanation of what has been learned. Consider a rule of the form "if y then x", where y is a conjunction of values of input variables and x is a value of the output variable. The number of input variables in y is called the order of the rule. In previous work, a measure of the information content or "value" of such a rule has been developed (the J-measure). It has been shown in [3] that a classifier built from the rules obtained by a constrained search of all possible rules performs comparably with other classifiers.
https://authors.library.caltech.edu/records/paqsj-wzc27

The Kanerva memory is stable
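The J-measure referred to in "Incremental Rule-based Learning" above scores a rule "if y then x" as p(y) times the divergence between the posterior and prior distributions of x. A minimal sketch for a binary output variable follows; the function name and the probability values are illustrative, not taken from the record:

```python
import math

def j_measure(p_y, p_x, p_x_given_y):
    # p(y) times the relative entropy (in bits) between the posterior
    # p(x|y) and the prior p(x), for a binary output variable x.
    post, prior = p_x_given_y, p_x
    j = (post * math.log2(post / prior)
         + (1 - post) * math.log2((1 - post) / (1 - prior)))
    return p_y * j

# A rule that leaves the distribution of x unchanged carries no information.
assert j_measure(0.5, 0.3, 0.3) == 0.0
# A rule that sharpens the posterior has positive value.
assert j_measure(0.5, 0.5, 0.9) > 0
```

Ranking candidate rules by this score is what makes a constrained search over all possible rules tractable.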
https://resolver.caltech.edu/CaltechAUTHORS:20190314-142000942
Authors: Chiueh, Tzi-Dar; Goodman, Rodney M.
Year: 1991
DOI: 10.1109/ijcnn.1991.155349
The Kanerva memory is a simple yet important model of the cerebellar cortex. Its power has been demonstrated by its huge storage capacity as an associative memory. In the present work, the Kanerva memory is briefly introduced and it is shown to be asymptotically stable in both the parallel update and sequential update modes. Its asymptotic stability is proved by introducing a Lyapunov function and showing that the function follows a descent trajectory as the Kanerva memory evolves.
https://authors.library.caltech.edu/records/6devf-g7b97

Texture analysis via unsupervised and supervised learning
https://resolver.caltech.edu/CaltechAUTHORS:20190314-142000852
Authors: Greenspan, H.; Goodman, R.; Chellappa, R.
Year: 1991
DOI: 10.1109/ijcnn.1991.155254
A framework for texture analysis based on combined unsupervised and supervised learning is proposed. The textured input is represented in the frequency-orientation space via a Gabor-wavelet pyramidal decomposition. In the unsupervised learning phase a neural network vector quantization scheme is used for the quantization of the feature-vector attributes and a projection onto a reduced-dimension clustered map for initial segmentation. A supervised stage follows, in which labeling of the textured map is achieved using a rule-based system. A set of informative features is extracted in the supervised stage as congruency rules between attributes using an information-theoretic measure. This learned set can now act as a classification set for test images. This approach is suggested as a general framework for pattern classification. Simulation results for the texture classification are given.
https://authors.library.caltech.edu/records/e63xe-zze35

Objective functions for probability estimation
https://resolver.caltech.edu/CaltechAUTHORS:20190314-142000675
Authors: Miller, John W.; Goodman, Rod; Smyth, Padhraic
Year: 1991
DOI: 10.1109/ijcnn.1991.155295
Backpropagation was originally derived in the context of minimizing a mean-squared error (MSE) objective function. More recently there has been interest in objective functions that provide accurate class probability estimates. In this paper we derive necessary and sufficient conditions on the required form of an objective function to provide probability estimates. This leads to the definition of a general class of functions which includes MSE and cross entropy (CE) as two of the simplest cases.
https://authors.library.caltech.edu/records/85qkn-3ga06

Incremental learning with rule-based neural networks
https://resolver.caltech.edu/CaltechAUTHORS:20190314-142000764
Authors: Higgins, C. M.; Goodman, R. M.
Year: 1991
DOI: 10.1109/ijcnn.1991.155294
A classifier for discrete-valued variable classification problems is presented. The system utilizes an information-theoretic algorithm for constructing informative rules from example data. These rules are then used to construct a neural network to perform parallel inference and posterior probability estimation. The network can be grown incrementally, so that new data can be incorporated without repeating the training on previous data. It is shown that this technique performs as well as other techniques such as backpropagation while having unique advantages in incremental learning capability, training efficiency, knowledge representation, and hardware implementation suitability.
https://authors.library.caltech.edu/records/11yy9-8df91

Combined Neural Network and Rule-Based Framework for Probabilistic Pattern Recognition and Discovery
https://resolver.caltech.edu/CaltechAUTHORS:20160127-130609305
Authors: Greenspan, Hayit K.; Goodman, Rodney; Chellappa, Rama
Year: 1992
A combined neural network and rule-based approach is suggested as a general framework for pattern recognition. This approach enables unsupervised and supervised learning, respectively, while providing probability estimates for the output classes. The probability maps are utilized for higher level analysis, such as a feedback for smoothing of the output label maps and the identification of unknown patterns (pattern "discovery"). The suggested approach is presented and demonstrated on the texture-analysis task. A correct classification rate in the 90th percentile is achieved for both unstructured and structured natural texture mosaics. The advantages of the probabilistic approach to pattern analysis are demonstrated.
https://authors.library.caltech.edu/records/q8zh9-a3c39

Learning fuzzy rule-based neural networks for function approximation
https://resolver.caltech.edu/CaltechAUTHORS:20190314-155127145
Authors: Higgins, C. M.; Goodman, R. M.
Year: 1992
DOI: 10.1109/ijcnn.1992.287127
In this paper, we present a method for the induction of fuzzy logic rules to predict a numerical function from samples of the function and its dependent variables. This method uses an information-theoretic approach based on our previous work with discrete-valued data [3]. The rules learned can then be used in a neural network to predict the function value based upon its dependent variables. An example is shown of learning a control system function.
https://authors.library.caltech.edu/records/4kqn3-31f69

A digital neural network architecture using random pulse trains
https://resolver.caltech.edu/CaltechAUTHORS:20190314-155126951
Authors: Erten, Gamze; Goodman, Rodney M.
Year: 1992
DOI: 10.1109/ijcnn.1992.287137
A digital neural network architecture that generates and processes random pulse trains is described, along with its unique advantages over existing comparable systems. In addition, test results from the VLSI implementation of its multiplication scheme are presented. These indicate that the implementation performs robustly and accurately.
https://authors.library.caltech.edu/records/3dx3p-ttd67

Network Operators Advice and Assistance (NOAA): a real-time traffic rerouting expert system
https://resolver.caltech.edu/CaltechAUTHORS:20190314-155127399
Authors: Goodman, Rodney M.; Ambrose, Barry; Latin, Hayes; Finnell, Sandee
Year: 1992
DOI: 10.1109/glocom.1992.276590
A real-time autonomous expert system has been developed to carry out traffic management in the Southern California telephone network. The system has been working on live data since September 1991 and generates rerouting advice that agrees with that generated by present network management procedures. A modular software design was adopted to allow for evolution. A graphics interface allows the user to easily navigate through the display of exception conditions and advice. Exceptions are shown highlighted on a map of Southern California. A severity measure is calculated for each exception and is used to prioritize the display of information.
https://authors.library.caltech.edu/records/snkk4-r3m28

Remote Sensing Image Analysis via a Texture Classification Neural Network
https://resolver.caltech.edu/CaltechAUTHORS:20160202-165824779
Authors: Greenspan, Hayit K.; Goodman, Rodney
Year: 1993
In this work we apply a texture classification network to remote sensing image analysis. The goal is to extract the characteristics of the area depicted in the input image, thus achieving a segmented map of the region. We have recently proposed a combined neural network and rule-based framework for texture recognition. The framework uses unsupervised and supervised learning, and provides probability estimates for the output classes. We describe the texture classification network and extend it to demonstrate its application to the Landsat and aerial image analysis domain.
https://authors.library.caltech.edu/records/wmrd8-0zx07

Probability Estimation from a Database Using a Gibbs Energy Model
https://resolver.caltech.edu/CaltechAUTHORS:20160127-132508165
Authors: Miller, John W.; Goodman, Rodney M.
Year: 1993
We present an algorithm for creating a neural network which produces accurate probability estimates as outputs. The network implements a Gibbs probability distribution model of the training database. This model is created by a new transformation relating the joint probabilities of attributes in the database to the weights (Gibbs potentials) of the distributed network model. The theory of this transformation is presented together with experimental results. One advantage of this approach is that the network weights are prescribed without iterative gradient descent. Used as a classifier, the network matched or outperformed published results on a variety of databases.
https://authors.library.caltech.edu/records/7j7vn-q6755

Learning Fuzzy Rule-Based Neural Networks for Control
https://resolver.caltech.edu/CaltechAUTHORS:20160203-163952250
Authors: Higgins, Charles M.; Goodman, Rodney M.
Year: 1993
A three-step method for function approximation with a fuzzy system is proposed. First, the membership functions and an initial rule representation are learned; second, the rules are compressed as much as possible using information theory; and finally, a computational network is constructed to compute the function value. This system is applied to two control examples: learning the truck and trailer backer-upper control system, and learning a cruise control system for a radio-controlled model car.
https://authors.library.caltech.edu/records/4rfnf-ejn40

Self-clustering recurrent networks
https://resolver.caltech.edu/CaltechAUTHORS:20190314-155127316
Authors: Zeng, Zheng; Goodman, Rodney M.; Smyth, Padhraic
Year: 1993
DOI: 10.1109/icnn.1993.298535
Recurrent neural networks have recently been shown to have the ability to learn finite state automata (FSA's) from examples. In this paper it is shown, based on empirical analyses, that second-order networks which are trained to learn FSA's tend to form discrete clusters as the state representation in the hidden unit activation space. This observation is used to define 'self-clustering' networks which automatically extract discrete state machines from the learned network. However, the problem of instability on long test strings is a factor in the generalization performance of recurrent networks - in essence, because of the analog nature of the state representation, the network gradually "forgets" where the individual state regions are. To address this problem a new network structure is introduced whereby the network uses quantization in the feedback path to force the learning of discrete states. Experimental results show that the new method learns FSA's just as well as existing methods in the literature but with the significant advantage of being stable on test strings of arbitrary length.
https://authors.library.caltech.edu/records/2r5dm-sjr89

Overcomplete steerable pyramid filters and rotation invariance
https://resolver.caltech.edu/CaltechAUTHORS:20120222-131648332
Authors: Greenspan, H.; Belongie, S. (ORCID: 0000-0002-0388-5217); Goodman, R.; Perona, P. (ORCID: 0000-0002-7583-5809); Rakshit, S.; Anderson, C. H. (ORCID: 0000-0003-1141-3559)
Year: 1994
DOI: 10.1109/CVPR.1994.323833
A given (overcomplete) discrete oriented pyramid may be converted into a steerable pyramid by interpolation. We present a technique for deriving the optimal interpolation functions (otherwise called 'steering coefficients'). The proposed scheme is demonstrated on a computationally efficient oriented pyramid, which is a variation on the Burt and Adelson (1983) pyramid. We apply the generated steerable pyramid to orientation-invariant texture analysis in order to demonstrate its excellent rotational isotropy. High classification rates and precise rotation identification are demonstrated.
https://authors.library.caltech.edu/records/g3eg0-2qd76

A statistical analysis of neural computation
https://resolver.caltech.edu/CaltechAUTHORS:20190315-142359458
Authors: Cortese, John A.; Goodman, Rodney M.
Year: 1994
DOI: 10.1109/isit.1994.394753
This paper presents an architecture and learning algorithm for a feedforward neural network implementing a two-pattern (image) classifier. By considering the input pixels to be random variables, a statistical binary hypothesis (likelihood ratio) test is implemented. A linear threshold separates p[X|H_0] and p[X|H_1], minimizing a risk function. In this manner, a single neuron is considered as a BSC whose crossover probability ε is given by the error tails of the pdfs. A single layer of neurons is viewed as a parallel bank of independent BSCs, which is equivalent to a single effective BSC representing that layer's hypothesis testing performance. A multiple layer network is viewed as a cascade of BSC channels, which again collapses into a single effective BSC.
https://authors.library.caltech.edu/records/c7td0-dct73

A learning algorithm for multi-layer perceptrons with hard-limiting threshold units
https://resolver.caltech.edu/CaltechAUTHORS:20190315-142359612
Authors: Goodman, Rodney M.; Zeng, Zheng
Year: 1994
DOI: 10.1109/icnn.1994.374161
We propose a novel learning algorithm to train networks with multilayer linear-threshold or hard-limiting units. The learning scheme is based on standard backpropagation, but with "pseudo-gradient" descent, which uses the gradient of a sigmoid function as a heuristic hint in place of that of the hard-limiting function. A justification that the pseudo-gradient always points in the correct downhill direction on the error surface for networks with one hidden layer is provided. The advantages of such networks are that their internal representations in the hidden layers are clearly interpretable, that well-defined classification rules can be easily obtained, that calculations for classification after training are very simple, and that they are easily implementable in hardware. Comparative experimental results on several benchmark problems, using both conventional backpropagation networks and our learning scheme for multilayer perceptrons, are presented and analyzed.
https://authors.library.caltech.edu/records/ksn5m-qvr57

A learning algorithm for multi-layer perceptrons with hard-limiting threshold units
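The "pseudo-gradient" idea described in the hard-limiting threshold records above can be sketched for a single unit: the forward pass uses the step nonlinearity, while the weight update borrows the sigmoid's derivative as a heuristic gradient. The AND task, learning rate, and epoch count here are illustrative choices, not taken from the paper:

```python
import math

def step(z):
    return 1.0 if z >= 0 else 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One hard-limiting unit trained on AND with a pseudo-gradient:
# the forward pass is hard-limiting, while the backward pass substitutes
# the sigmoid derivative for the (zero almost everywhere) step derivative.
w = [0.0, 0.0, 0.0]                            # two input weights + bias
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
for _ in range(500):
    for (x1, x2), t in data:
        x = (x1, x2, 1.0)
        z = sum(wi * xi for wi, xi in zip(w, x))
        err = step(z) - t                      # 0 when correct, ±1 when wrong
        g = sigmoid(z) * (1.0 - sigmoid(z))    # pseudo-gradient hint
        w = [wi - 0.5 * err * g * xi for wi, xi in zip(w, x)]

assert all(step(sum(wi * xi for wi, xi in zip(w, (x1, x2, 1.0)))) == t
           for (x1, x2), t in data)
```

Because the heuristic gradient is always positive, each update moves the weights in the same direction a perceptron update would, which is the intuition behind the paper's downhill-direction justification.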
https://resolver.caltech.edu/CaltechAUTHORS:20190315-142359536
Authors: {'items': [{'id': 'Goodman-R-M', 'name': {'family': 'Goodman', 'given': 'Rodney M.'}}, {'id': 'Zeng-Zheng', 'name': {'family': 'Zeng', 'given': 'Zheng'}}]}
Year: 1994
DOI: 10.1109/nnsp.1994.366045
We propose a novel learning algorithm to train networks with multiple layers of linear-threshold (hard-limiting) units. The learning scheme is based on standard backpropagation, but uses "pseudo-gradient" descent: the gradient of a sigmoid function serves as a heuristic hint in place of that of the hard-limiting function. A justification is provided that, for networks with one hidden layer, the pseudo-gradient always points in the correct downhill direction on the error surface. The advantages of such networks are that their internal representations in the hidden layers are clearly interpretable and yield well-defined classification rules, that classification after training requires only very simple calculations, and that they are easily implementable in hardware. Comparative experimental results on several benchmark problems, using both conventional backpropagation networks and our learning scheme for multilayer perceptrons, are presented and analyzed.https://authors.library.caltech.edu/records/8gswr-6qv48Rotation Invariant Texture Recognition Using a Steerable Pyramid
https://resolver.caltech.edu/CaltechAUTHORS:20120306-154819133
Authors: {'items': [{'id': 'Greenspan-H', 'name': {'family': 'Greenspan', 'given': 'H.'}}, {'id': 'Belongie-S', 'name': {'family': 'Belongie', 'given': 'S.'}, 'orcid': '0000-0002-0388-5217'}, {'id': 'Goodman-R-M', 'name': {'family': 'Goodman', 'given': 'R.'}}, {'id': 'Perona-P', 'name': {'family': 'Perona', 'given': 'P.'}, 'orcid': '0000-0002-7583-5809'}]}
Year: 1994
DOI: 10.1109/ICPR.1994.576896
A rotation-invariant texture recognition system is presented. A steerable oriented pyramid is used to extract representative features for the input textures. The steerability of the filter set allows a shift to an invariant representation via a DFT-encoding step. Supervised classification follows. State-of-the-art recognition results are presented on a 30-texture database, comparing the performance of K-NN, backpropagation, and rule-based classifiers. In addition, high-accuracy estimation of the input rotation angle is demonstrated.https://authors.library.caltech.edu/records/gxgyx-x8c68Analog VLSI system for active drag reduction
https://resolver.caltech.edu/CaltechAUTHORS:20190429-151824427
Authors: {'items': [{'id': 'Gupta-B', 'name': {'family': 'Gupta', 'given': 'Bhusan'}}, {'id': 'Goodman-R-M', 'name': {'family': 'Goodman', 'given': 'Rodney'}}, {'id': 'Jiang-Fukang', 'name': {'family': 'Jiang', 'given': 'Fukang'}}, {'id': 'Tai-Yu-Chong', 'name': {'family': 'Tai', 'given': 'Yu-Chong'}, 'orcid': '0000-0001-8529-106X'}, {'id': 'Tung-Steve', 'name': {'family': 'Tung', 'given': 'Steve'}}, {'id': 'Ho-Chih-Ming', 'name': {'family': 'Ho', 'given': 'Chih-Ming'}}]}
Year: 1996
DOI: 10.1109/mnnfs.1996.493771
We describe an analog CMOS VLSI system that can process real-time signals from surface-mounted shear stress sensors to detect regions of high shear stress along a surface in an airflow. The outputs of the CMOS circuit are used to actuate micromachined flaps with the goal of reducing this high shear stress on the surface and thereby lowering the total drag. We have designed, fabricated, and tested parts of this system in a wind tunnel in laminar and turbulent flow regimes.https://authors.library.caltech.edu/records/mxes6-b7435A surface-micromachined shear stress imager
https://resolver.caltech.edu/CaltechAUTHORS:20190429-151824343
Authors: {'items': [{'id': 'Jiang-Fukang', 'name': {'family': 'Jiang', 'given': 'Fukang'}}, {'id': 'Tai-Yu-Chong', 'name': {'family': 'Tai', 'given': 'Yu-Chong'}, 'orcid': '0000-0001-8529-106X'}, {'id': 'Gupta-B', 'name': {'family': 'Gupta', 'given': 'Bhusan'}}, {'id': 'Goodman-R-M', 'name': {'family': 'Goodman', 'given': 'Rodney'}}, {'id': 'Tung-Steve', 'name': {'family': 'Tung', 'given': 'Steve'}}, {'id': 'Huang-Jin-Biao', 'name': {'family': 'Huang', 'given': 'Jin-Biao'}}, {'id': 'Ho-Chih-Ming', 'name': {'family': 'Ho', 'given': 'Chih-Ming'}}]}
Year: 1996
DOI: 10.1109/memsys.1996.493838
A new MEMS shear stress sensor imager has been developed and its capability of imaging surface shear stress distribution has been demonstrated. The imager consists of multiple rows of vacuum-insulated shear stress sensors with a 300 µm pitch. This small spacing allows it to detect surface flow patterns that could not be directly measured before. The high frequency response (30 kHz) of the sensor under constant-temperature bias mode also allows it to be used in high-Reynolds-number turbulent flow studies. The measurement results in a fully developed turbulent flow agree well with previously published numerical and experimental results.https://authors.library.caltech.edu/records/1bemn-sc809Active drag reduction using neural networks
https://resolver.caltech.edu/CaltechAUTHORS:20190320-143014388
Authors: {'items': [{'id': 'Babcock-D', 'name': {'family': 'Babcock', 'given': 'David'}}, {'id': 'Lee-Changhoon', 'name': {'family': 'Lee', 'given': 'Changhoon'}}, {'id': 'Gupta-B', 'name': {'family': 'Gupta', 'given': 'Bhusan'}}, {'id': 'Kim-John', 'name': {'family': 'Kim', 'given': 'John'}}, {'id': 'Goodman-R-M', 'name': {'family': 'Goodman', 'given': 'Rodney'}}]}
Year: 1996
DOI: 10.1109/NICRSP.1996.542770
This paper presents the application of a neural network controller to the problem of active drag reduction in a fully turbulent 3D fluid flow regime. The neural network learns a function nearly identical to an analytically derived control law. We then demonstrate the ability of a neural controller to maintain a drag-reduced flow in a fully turbulent fluid simulation. Finally we examine the amount of parameter variation that may be required for a physical implementation of such a neural controller.https://authors.library.caltech.edu/records/3qrh3-0zr13A wafer-scale MEMS and analog VLSI system for active drag reduction
https://resolver.caltech.edu/CaltechAUTHORS:20190320-151208784
Authors: {'items': [{'id': 'Gupta-B', 'name': {'family': 'Gupta', 'given': 'Bhusan'}}, {'id': 'Goodman-R-M', 'name': {'family': 'Goodman', 'given': 'Rodney'}}, {'id': 'Fukang-Jiang', 'name': {'family': 'Fukang', 'given': 'Jiang'}}, {'id': 'Tsao-Tom', 'name': {'family': 'Tsao', 'given': 'Tom'}}, {'id': 'Tai-Yu-Chong', 'name': {'family': 'Tai', 'given': 'Yu-Chong'}, 'orcid': '0000-0001-8529-106X'}, {'id': 'Tung-Steve', 'name': {'family': 'Tung', 'given': 'Steve'}}, {'id': 'Ho-Chih-Ming', 'name': {'family': 'Ho', 'given': 'Chih-Ming'}}]}
Year: 1996
DOI: 10.1109/iciss.1996.552410
We describe an analog CMOS VLSI system that can process real-time signals from integrated shear stress sensors to detect regions of high shear stress along a surface in an airflow. The outputs of the CMOS circuit control the actuation of integrated micromachined flaps with the goal of reducing this high shear stress on the surface and thereby lowering the total drag. We have designed, fabricated, and tested components of this system in a wind tunnel in both laminar and turbulent flow regimes with the goal of building a wafer-scale system.https://authors.library.caltech.edu/records/d6qpm-r8270Keyword spotting for cursive document retrieval
https://resolver.caltech.edu/CaltechAUTHORS:20190429-151824702
Authors: {'items': [{'id': 'Keaton-P', 'name': {'family': 'Keaton', 'given': 'Patricia'}}, {'id': 'Greenspan-H-K', 'name': {'family': 'Greenspan', 'given': 'Hayit'}}, {'id': 'Goodman-R-M', 'name': {'family': 'Goodman', 'given': 'Rodney'}}]}
Year: 1997
DOI: 10.1109/dia.1997.627095
We present one of the first attempts towards automatic retrieval of documents in the noisy environment of unconstrained, multiple-author handwritten forms. The documents were written in cursive script, for which conventional OCR and text retrieval engines are not adequate. We focus on a visual word-spotting indexing scheme for scanned documents housed in the Archives of the Indies in Seville, Spain. The framework presented utilizes pattern recognition, learning, and information fusion methods, and is motivated by human word-spotting studies. The proposed system is described and initial results are presented.https://authors.library.caltech.edu/records/cr110-jxq97An integrated MEMS system for turbulent boundary layer control
https://resolver.caltech.edu/CaltechAUTHORS:20190409-143106813
Authors: {'items': [{'id': 'Tsao-Thomas', 'name': {'family': 'Tsao', 'given': 'Thomas'}}, {'id': 'Jiang-Fukang', 'name': {'family': 'Jiang', 'given': 'Fukang'}}, {'id': 'Miller-R-A', 'name': {'family': 'Miller', 'given': 'Raanan'}}, {'id': 'Tai-Yu-Chong', 'name': {'family': 'Tai', 'given': 'Yu-Chong'}, 'orcid': '0000-0001-8529-106X'}, {'id': 'Gupta-B', 'name': {'family': 'Gupta', 'given': 'Bhusan'}}, {'id': 'Goodman-R-M', 'name': {'family': 'Goodman', 'given': 'Rodney'}}, {'id': 'Tung-Steve', 'name': {'family': 'Tung', 'given': 'Steve'}}, {'id': 'Ho-Chih-Ming', 'name': {'family': 'Ho', 'given': 'Chih-Ming'}}]}
Year: 1997
DOI: 10.1109/sensor.1997.613647
The goal of this project is a first attempt to achieve active drag reduction using a large-scale integrated MEMS system. Previously, we have reported the successful development of a shear-stress imager which allows us to "see" surface vortices (1996). Here we present the promising results of the interaction between micro flap actuators and vortices. It is found that microactuators can actually reduce drag to values even lower than the drag associated with pure laminar flow, and that the microactuators can reduce shear stress values in turbulent flow as well. Based on these results, we have attempted the first totally integrated system that consists of 18 shear stress sensors, 3 magnetic flap-type actuators and control electronics for use in turbulent boundary layer control studies.https://authors.library.caltech.edu/records/x0sgg-h2m36Neural networks for active drag reduction in fully turbulent airflows
https://resolver.caltech.edu/CaltechAUTHORS:20190320-124453146
Authors: {'items': [{'id': 'Babcock-D', 'name': {'family': 'Babcock', 'given': 'David'}}, {'id': 'Goodman-R-M', 'name': {'family': 'Goodman', 'given': 'Rodney'}}, {'id': 'Lee-Changhoon', 'name': {'family': 'Lee', 'given': 'Changhoon'}}, {'id': 'Kim-John', 'name': {'family': 'Kim', 'given': 'John'}}]}
Year: 1997
DOI: 10.1049/cp:19970725
This paper presents the application of a neural network controller to the problem of active drag reduction in a fully turbulent 3D fluid flow regime. Based on a successful yet infeasible previous active control scheme, we trained a neural network to mimic the control law using only surface spanwise shear stress measurements. We then demonstrate the ability of a neural controller implemented in an adaptive inverse model scheme to maintain a drag-reduced flow in a fully turbulent fluid simulation. By observing the weights of the on-line controller, a simple control law that predicts actuations proportional to the spanwise derivative of the spanwise shear stress is derived. Finally we examine the amount of parameter variation that may be required for a physical implementation of linear and nonlinear neural controllers.https://authors.library.caltech.edu/records/gwc36-pj955Bach in a Box - Real-Time Harmony
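The simple control law described above lends itself to a one-line sketch. Assuming shear stress samples on a uniform spanwise grid (the gain k and the central-difference discretization are illustrative choices, not the paper's implementation):

```python
# Illustrative sketch of the derived control law: actuation proportional
# to the spanwise derivative of the spanwise shear stress, estimated by
# a central difference over a uniform spanwise grid.
def actuation(tau_z, dz, k=1.0):
    """tau_z: spanwise wall shear stress sampled along the span;
    dz: grid spacing; k: proportionality gain (hypothetical value)."""
    return [k * (tau_z[i + 1] - tau_z[i - 1]) / (2.0 * dz)
            for i in range(1, len(tau_z) - 1)]
```

For example, `actuation([0.0, 1.0, 2.0, 3.0], 1.0)` yields an actuation of 1.0 at each interior sample, since the spanwise slope is constant there.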
https://resolver.caltech.edu/CaltechAUTHORS:20160226-162823587
Authors: {'items': [{'id': 'Spangler-R-R', 'name': {'family': 'Spangler', 'given': 'Randall R.'}}, {'id': 'Goodman-R-M', 'name': {'family': 'Goodman', 'given': 'Rodney M.'}}, {'id': 'Hawkins-J', 'name': {'family': 'Hawkins', 'given': 'Jim'}}]}
Year: 1998
We describe a system for learning J. S. Bach's rules of musical harmony. These rules are learned from examples and are expressed as rule-based neural networks. The rules are then applied in real time to generate new accompanying harmony for a live performer. Real-time functionality imposes constraints on the learning and harmonizing processes, including limitations on the types of information the system can use as input and the amount of processing the system can perform. We demonstrate algorithms for generating and refining musical rules from examples which meet these constraints. We describe a method for incorporating a priori knowledge into the rules which yields significant performance gains. We then describe techniques for applying these rules to generate new music in real time. We conclude the paper with an analysis of experimental results.https://authors.library.caltech.edu/records/7hv87-tdz85Integrated chemical sensors based on carbon black and polymer films using a standard CMOS process and post-processing
https://resolver.caltech.edu/CaltechAUTHORS:20190409-134834363
Authors: {'items': [{'id': 'Dickson-J-A', 'name': {'family': 'Dickson', 'given': 'Jeffrey A.'}}, {'id': 'Goodman-R-M', 'name': {'family': 'Goodman', 'given': 'Rodney M.'}}]}
Year: 2000
DOI: 10.1109/ISCAS.2000.858758
We present an integrated chemical sensor array fabricated using a CMOS process followed by post-processing. The sensor presented in this paper incorporates 324 individually addressable sensing nodes. Post-processing involves an electroless nickel and gold plating step to fabricate sensing contacts, and the deposition of a carbon-black-based polymer sensor material. The operation of the integrated sensor is confirmed. This sensor technology will allow the creation of large arrays of chemically diverse sensors.https://authors.library.caltech.edu/records/qbsxn-dmn30Dynamic charge restoration of floating gate subthreshold MOS translinear circuits
https://resolver.caltech.edu/CaltechAUTHORS:20190326-112619359
Authors: {'items': [{'id': 'Koosh-V-F', 'name': {'family': 'Koosh', 'given': 'Vincent F.'}}, {'id': 'Goodman-R-M', 'name': {'family': 'Goodman', 'given': 'Rodney'}}]}
Year: 2001
DOI: 10.1109/ARVLSI.2001.915558
We extend a class of analog CMOS circuits that can be used to perform many analog computational tasks. The circuits utilize MOSFETs in their subthreshold region as well as capacitors and switches to produce the computations. We show a few basic current-mode building blocks that perform squaring, square root, and multiplication/division, which should be sufficient to gain an understanding of how to implement other power-law circuits. We then combine the circuit building blocks into a more complicated circuit that normalizes a current by the square root of the sum of the squares (vector sum) of the currents. Each of these circuits has switches at the inputs of its floating gates which are used to dynamically set and restore the charges at the floating gates to proceed with the computation.https://authors.library.caltech.edu/records/9y9fx-63c98VLSI neural network with digital weights and analog multipliers
https://resolver.caltech.edu/CaltechAUTHORS:20190429-151824627
Authors: {'items': [{'id': 'Koosh-V-F', 'name': {'family': 'Koosh', 'given': 'Vincent F.'}}, {'id': 'Goodman-R-M', 'name': {'family': 'Goodman', 'given': 'Rodney'}}]}
Year: 2001
DOI: 10.1109/iscas.2001.921290
A VLSI feedforward neural network is presented that makes use of digital weights and analog multipliers. The network is trained in a chip-in-the-loop fashion with a host computer implementing the training algorithm. The chip uses a serial digital weight bus implemented by a long shift register to input the weights. The inputs and outputs of the network are provided directly at pins on the chip. The training algorithm used is a parallel weight perturbation technique. Training results are shown for a 2 input, 1 output network trained with an AND function, and for a 2 input, 2 hidden unit, 1 output network trained with an XOR function.https://authors.library.caltech.edu/records/32pm5-fb726Dynamic charge restoration of floating gate subthreshold MOS translinear circuits
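Parallel weight perturbation, the training algorithm named above, can be sketched in software (the perturbation size, learning rate, and toy error function below are illustrative stand-ins for the chip-in-the-loop error measurement):

```python
import numpy as np

# Illustrative sketch of parallel weight perturbation (constants and the
# toy error function are mine): every weight is perturbed at once, the
# resulting change in error is measured, and each weight is nudged
# against its share of the blame.
rng = np.random.default_rng(0)

def perturb_step(weights, error_fn, delta=0.01, lr=0.1):
    pert = delta * rng.choice([-1.0, 1.0], size=weights.shape)
    d_err = error_fn(weights + pert) - error_fn(weights)
    # d_err / pert is a stochastic estimate of the error gradient.
    return weights - lr * d_err / pert

# Toy stand-in for the measured network error: a quadratic with its
# minimum at 2.0. On the chip, error_fn would query the hardware.
w = np.array([0.0])
for _ in range(200):
    w = perturb_step(w, lambda v: float(((v - 2.0) ** 2).sum()))
```

The appeal for hardware training is that only the scalar error needs to be measured; no analytic gradient of the analog multipliers is required.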
https://resolver.caltech.edu/CaltechAUTHORS:20190405-154249828
Authors: {'items': [{'id': 'Koosh-V-F', 'name': {'family': 'Koosh', 'given': 'Vincent F.'}}, {'id': 'Goodman-R-M', 'name': {'family': 'Goodman', 'given': 'Rodney M.'}}]}
Year: 2001
DOI: 10.1109/ISCAS.2001.921781
We extend a class of analog CMOS circuits that can be used to perform many analog computational tasks. The circuits utilize MOSFETs in their subthreshold region as well as capacitors and switches to produce the computations. We show a few basic current-mode building blocks that perform squaring, square root, and multiplication/division, which should be sufficient to gain an understanding of how to implement other power-law circuits. We then combine the circuit building blocks into a more complicated circuit that normalizes a current by the square root of the sum of the squares (vector sum) of the currents. Each of these circuits has switches at the inputs of its floating gates which are used to dynamically set and restore the charges at the floating gates to proceed with the computation.https://authors.library.caltech.edu/records/mxctz-t3t19Swarm robotic odor localization
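The target computation of the normalization circuit described above (the function it computes, not a transistor-level model) is just Euclidean normalization of the input currents:

```python
import math

# Each output current is the corresponding input divided by the vector
# sum (Euclidean norm) of all the inputs -- the function the combined
# squaring / square-root building blocks realize in current mode.
def vector_sum_normalize(currents):
    norm = math.sqrt(sum(i * i for i in currents))
    return [i / norm for i in currents]
```

For inputs of 3.0 and 4.0 the vector sum is 5.0, so the normalized outputs are 0.6 and 0.8.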
https://resolver.caltech.edu/CaltechAUTHORS:20190320-143921262
Authors: {'items': [{'id': 'Hayes-A-T', 'name': {'family': 'Hayes', 'given': 'Adam T.'}}, {'id': 'Martinoli-A', 'name': {'family': 'Martinoli', 'given': 'Alcherio'}}, {'id': 'Goodman-R-M', 'name': {'family': 'Goodman', 'given': 'Rodney M.'}}]}
Year: 2001
DOI: 10.1109/IROS.2001.976311
This paper presents an investigation of odor localization by groups of autonomous mobile robots using principles of swarm intelligence. We describe a distributed algorithm by which groups of agents can solve the full odor localization task more efficiently than a single agent. We then demonstrate that a group of real robots under fully distributed control can successfully traverse a real odor plume. Finally, we show that an embodied simulator can faithfully reproduce the real-robot experiments and thus can be a useful tool for off-line study and optimization of odor localization in the real world.https://authors.library.caltech.edu/records/ad3ys-9b559