CaltechAUTHORS: Article
https://feeds.library.caltech.edu/people/Goodman-R-M/article.rss
A Caltech Library Repository Feed
Format: RSS 2.0 (http://www.rssboard.org/rss-specification) | Generator: python-feedgen | Language: en
Last build: Tue, 23 Apr 2024 19:28:51 -0700

Binary codes with disjoint codebooks and mutual Hamming distance
https://resolver.caltech.edu/CaltechAUTHORS:20190314-130608712
Year: 1974
DOI: 10.1049/el:19740308
Equal-length linear binary block error-control codes with disjoint codebooks and mutual Hamming distance are considered. A method of constructing pairs of these disjoint codes from known cyclic codes, and determining their mutual distance, is described. Some sets of length-15 cyclic codes are tabulated.

Data transmission with variable-redundancy error control over a high-frequency channel
https://resolver.caltech.edu/CaltechAUTHORS:20190314-130608806
Year: 1975
DOI: 10.1049/piee.1975.0022
Results of computations and field tests on a binary-data-transmission system, operating at 1 kbaud over an h.f. channel, are presented. Error correction is effected by means of error detection and automatic request for repeat, via a feedback channel (a Post Office private line). A set of short, fixed-block-length cyclic codes is available, a code of appropriate redundancy being automatically selected to match the varying channel conditions. The decision about which code to use is made at the receiver, and the transmitter is informed via the feedback channel. The results show that relatively simple, reliable, and efficient data communication can be realised by this means.

An efficient minimum-distance decoding algorithm for convolutional error-correcting codes
https://resolver.caltech.edu/CaltechAUTHORS:20190314-130608899
Year: 1978
DOI: 10.1049/piee.1978.0027
Minimum-distance decoding of convolutional codes has generally been considered impractical for other than relatively short constraint length codes, because of the exponential growth in complexity with increasing constraint length. The minimum-distance decoding algorithm proposed in the paper, however, uses a sequential decoding approach to avoid an exponential growth in complexity with increasing constraint length, and also utilises the distance and structural properties of convolutional codes to considerably reduce the amount of tree searching needed to find the minimum-distance path. In this way the algorithm achieves a complexity that does not grow exponentially with increasing constraint length, and is efficient for both long and short constraint length codes. The algorithm consists of two main processes. Firstly, a direct-mapping scheme, which automatically finds the minimum-distance path in a single mapping operation, is used to eliminate the need for all short back-up tree searches. Secondly, when a longer back-up search is required, an efficient tree-searching scheme is used to minimise the required search effort. The paper describes the complete algorithm and its theoretical basis, and examples of its operation are given.

Analysis of the computational and storage requirements for the minimum-distance decoding of convolutional codes
https://resolver.caltech.edu/CaltechAUTHORS:20190314-130608986
Year: 1979
DOI: 10.1049/piee.1979.0004
In this paper we present analytical results on the computational requirements for the minimum-distance decoding of convolutional codes. By deriving upper bounds on the number of decoding operations required to advance one code segment, we show that far fewer operations are required than in the case of sequential decoding. This implies a significant reduction in the severity of the buffer-overflow problem. We then propose several modifications which could further reduce the computational effort required at long back-up distances. Finally, we investigate the trade-off between coding-parameter selection and storage requirements as an aid to quantitative decoder design. Examples and further aspects are also presented and discussed.

Soft-decision error control for H.F. data transmission
https://resolver.caltech.edu/CaltechAUTHORS:20190314-130609070
Year: 1980
DOI: 10.1049/ip-f-1.1980.0055
This paper is concerned with the soft-decision decoding of error-correcting codes in the context of h.f. data transmission. The use of soft-decision information from the data modem results in an improvement in the performance of a forward-error-correction scheme, when compared with hard-decision decoding, without any further redundancy penalty. In the paper, we estimate the theoretical improvements that can be expected from soft-decision decoding of block and convolutional codes, in terms of both random and burst error-correcting power. Also, this is related to expected coding gains for the Gaussian and Rayleigh fading channels. In addition, we investigate the performance of soft-decision decoding on two types of h.f. modem: a multisubcarrier parallel transmission format (p.t.f.) type modem, and an experimental serial transmission s.m.i.d.d. type modem. We conclude that significant improvements in the performance of coded systems are obtainable through the use of soft-decision decoding.

Soft-decision minimum-distance sequential decoding algorithm for convolutional codes
https://resolver.caltech.edu/CaltechAUTHORS:20190314-130609155
Year: 1981
DOI: 10.1049/ip-f-1.1981.0029
The maximum-likelihood decoding of convolutional codes has generally been considered impractical for other than relatively short constraint length codes, because of the exponential growth in complexity with increasing constraint length. The soft-decision minimum-distance decoding algorithm proposed in the paper approaches the performance of a maximum-likelihood decoder, and uses a sequential decoding approach to avoid an exponential growth in complexity. The algorithm also utilises the distance and structural properties of convolutional codes to considerably reduce the amount of searching needed to find the minimum soft-decision distance paths when a back-up search is required. This is done in two main ways. First, a small set of paths called permissible paths are utilised to search the whole of the subtree for the better path, instead of using all the paths within a given subtree. Secondly, the decoder identifies which subset of permissible paths should be utilised in a given search and which may be ignored. In this way many unnecessary path searches are completely eliminated. Because the decoding effort required by the algorithm is low, and the decoding processes are simple, the algorithm opens the possibility of building high-speed long constraint length convolutional decoders whose performance approaches that of the optimum maximum-likelihood decoder. The paper describes the algorithm and its theoretical basis, and gives examples of its operation. Also, results obtained from practical implementations of the algorithm using a high-speed microcomputer are presented.

Lifetime analyses of error-control coded semiconductor RAM systems
https://resolver.caltech.edu/CaltechAUTHORS:20190314-130609254
Year: 1982
DOI: 10.1049/ip-e.1982.0017
The paper is concerned with developing quantitative results on the lifetime of coded random-access semiconductor memory systems. Although individual RAM chips are highly reliable, when large numbers of chips are combined to form a large memory system, the reliability may not be sufficiently high for the given application. In this case, error-correction coding is used to improve the reliability and hence the lifetime of the system. Formulas are developed which will enable the system designer to calculate the improvement in lifetime (over an uncoded system) for any particular coding scheme and size of memory. This will enable the designer to see if a particular memory system gives the required reliability, in terms of hours of lifetime, for the particular application. In addition, the designer will be able to calculate the percentage of identical systems that will, on average, last a given length of time.

New trapdoor-knapsack public-key cryptosystem
https://resolver.caltech.edu/CaltechAUTHORS:20190314-130609335
Year: 1985
DOI: 10.1049/ip-e.1985.0040
The paper presents a new trapdoor-knapsack public-key cryptosystem. The encryption equation is based on the general modular knapsack equation, but, unlike the Merkle-Hellman scheme, the knapsack components do not have to have a superincreasing structure. The trapdoor is based on transformations between the modular and radix form of the knapsack components, via the Chinese remainder theorem. The security is based on factoring a number composed of 256-bit prime factors. The resulting cryptosystem has high density, approximately 30% message expansion and a public key of 14 Kbits. This compares very favourably with the Merkle-Hellman scheme, which has over 100% expansion and a public key of 80 Kbits. The major advantage of the scheme when compared with the RSA scheme is one of speed. Typically, knapsack schemes such as the one proposed here are capable of throughput speeds which are orders of magnitude faster than the RSA scheme.

The reliability of single-error protected computer memories
https://resolver.caltech.edu/CaltechAUTHORS:BLAieeetc88
Year: 1988
DOI: 10.1109/12.75143
The lifetimes of computer memories which are protected with single-error-correcting-double-error-detecting (SEC-DED) codes are studied. The authors assume that there are five possible types of memory chip failure (single-cell, row, column, row-column and whole chip) and make a simplifying assumption (the Poisson assumption) that has been substantiated experimentally. A simple closed-form expression is derived for the system reliability function. Using this formula and chip reliability data taken from published tables, it is possible to compute the mean time to failure for realistic memory systems.

Decision tree design from a communication theory standpoint
https://resolver.caltech.edu/CaltechAUTHORS:20190314-130609598
Year: 1988
DOI: 10.1109/18.21221
A communication theory approach to decision tree design based on a top-down mutual information algorithm is presented. It is shown that this algorithm is equivalent to a form of Shannon-Fano prefix coding, and several fundamental bounds relating decision-tree parameters are derived. The bounds are used in conjunction with a rate-distortion interpretation of tree design to explain several phenomena previously observed in practical decision-tree design. A termination rule for the algorithm, called the delta-entropy rule, is proposed, which improves its robustness in the presence of noise. Simulation results are presented, showing that the tree classifiers derived by the algorithm compare favourably to the single nearest neighbour classifier.

Linear sum codes for random access memories
https://resolver.caltech.edu/CaltechAUTHORS:20190314-130609423
Year: 1988
DOI: 10.1109/12.2254
Linear sum codes (LSCs) form a class of error control codes designed to provide on-chip error correction to semiconductor random access memories (RAMs). They use the natural addressing scheme found on RAMs to form and access codewords with a minimum of overhead. The authors formally define linear sum codes and examine some of their characteristics. Specifically, they examine their minimum distance characteristics, their error correcting capabilities, and the complexity involved in their implementation. In addition, detailed consideration is given to an easily implemented class of single-, double-, and triple-error correcting LSCs.

The complexity of information set decoding
https://resolver.caltech.edu/CaltechAUTHORS:20190314-142000507
Year: 1990
DOI: 10.1109/18.57202
Information set decoding is an algorithm for decoding any linear code. Expressions for the complexity of the procedure that are logarithmically exact for virtually all codes are presented. The expressions cover the cases of complete minimum distance decoding and bounded hard-decision decoding, as well as the important case of bounded soft-decision decoding. It is demonstrated that these results are vastly better than those for the trivial algorithms of searching through all codewords or through all syndromes, and are significantly better than those for any other general algorithm currently known. For codes over large symbol fields, the procedure tends towards a complexity that is subexponential in the symbol size.

A static RAM chip with on-chip error correction
https://resolver.caltech.edu/CaltechAUTHORS:20190314-142000592
Year: 1990
DOI: 10.1109/4.62154
This paper describes a 2-kb CMOS static RAM with on-chip error-correction capability (ECCRAM chip). The chip employs the linear sum code (LSC) technique to perform error detection and correction. The ECCRAM chip has been fabricated in a double-metal scalable CMOS process with 3-µm feature size. Test results from the fabricated chip show a significant improvement in random-error tolerance.

A real-time neural system for color constancy
https://resolver.caltech.edu/CaltechAUTHORS:20170408-172537077
Year: 1991
DOI: 10.1109/72.80334
A neural network approach to the problem of color constancy is presented. Various algorithms based on Land's retinex theory are discussed with respect to neurobiological parallels, computational efficiency, and suitability for VLSI implementation. The efficiency of one algorithm is improved by the application of resistive grids and is tested in computer simulations; the simulations make clear the strengths and weaknesses of the algorithm. A novel extension to the algorithm is developed to address its weaknesses. An electronic system that is based on the original algorithm and that operates at video rates was built using subthreshold analog CMOS VLSI resistive grids. The system displays color constancy abilities and qualitatively mimics aspects of human color perception.

Recurrent correlation associative memories
https://resolver.caltech.edu/CaltechAUTHORS:20190314-142001270
Year: 1991
DOI: 10.1109/72.80338
A model for a class of high-capacity associative memories is presented. Since they are based on two-layer recurrent neural networks and their operations depend on the correlation measure, these associative memories are called recurrent correlation associative memories (RCAMs). The RCAMs are shown to be asymptotically stable in both synchronous and asynchronous (sequential) update modes as long as their weighting functions are continuous and monotone nondecreasing. In particular, a high-capacity RCAM named the exponential correlation associative memory (ECAM) is proposed. The asymptotic storage capacity of the ECAM scales exponentially with the length of memory patterns, and it meets the ultimate upper bound for the capacity of associative memories. The asymptotic storage capacity of the ECAM with limited dynamic range in its exponentiation nodes is found to be proportional to that dynamic range. Design and fabrication of a 3-mm CMOS ECAM chip is reported. The prototype chip can store 32 24-bit memory patterns, and its speed is higher than one associative recall operation every 3 µs. An application of the ECAM chip to vector quantization is also described.

The reliability of semiconductor RAM memories with on-chip error-correction coding
https://resolver.caltech.edu/CaltechAUTHORS:20190314-142001446
Year: 1991
DOI: 10.1109/18.79957
The mean lifetimes are studied of semiconductor memories that have been encoded with an on-chip single error-correcting code along each row of memory cells. Specifically, the effects of single-cell soft errors and various hardware failures (single-cell, row, column, row-column, and entire chip) in the presence of soft-error scrubbing are examined. An expression is presented for computing the mean time to failure of such memories in the presence of these types of errors using the Poisson approximation; the expression has been confirmed experimentally to accurately model the mean time to failure of memories protected by single error-correcting codes. These analyses will enable the system designer to accurately assess the improvement in mean time to failure (MTTF) achieved by the use of error-control coding.

An information theoretic approach to rule induction from databases
https://resolver.caltech.edu/CaltechAUTHORS:20190314-155127061
Year: 1992
DOI: 10.1109/69.149926
The knowledge acquisition bottleneck in obtaining rules directly from an expert is well known. Hence, the problem of automated rule acquisition from data is a well-motivated one, particularly for domains where a database of sample data exists. In this paper we introduce a novel algorithm for the induction of rules from examples. The algorithm is novel in the sense that it not only learns rules for a given concept (classification), but it simultaneously learns rules relating multiple concepts. This type of learning, known as generalized rule induction, is considerably more general than existing algorithms, which tend to be classification oriented. Initially we focus on the problem of determining a quantitative, well-defined rule preference measure. In particular, we propose a quantity called the J-measure as an information theoretic alternative to existing approaches. The J-measure quantifies the information content of a rule or a hypothesis. We outline the information theoretic origins of this measure and examine its plausibility as a hypothesis preference measure. We then define the ITRULE algorithm, which uses the newly proposed measure to learn a set of optimal rules from a set of data samples, and we conclude the paper with an analysis of experimental results on real-world data.

Rule-based neural networks for classification and probability estimation
https://resolver.caltech.edu/CaltechAUTHORS:GOOnc92
Year: 1992
DOI: 10.1162/neco.1992.4.6.781
In this paper we propose a network architecture that combines a rule-based approach with that of the neural network paradigm. Our primary motivation for this is to ensure that the knowledge embodied in the network is explicitly encoded in the form of understandable rules. This enables the network's decision to be understood, and provides an audit trail of how that decision was arrived at. We utilize an information theoretic approach to learning a model of the domain knowledge from examples. This model takes the form of a set of probabilistic conjunctive rules between discrete input evidence variables and output class variables. These rules are then mapped onto the weights and nodes of a feedforward neural network, resulting in a directly specified architecture. The network acts as a parallel Bayesian classifier, but more importantly, can also output posterior probability estimates of the class variables. Empirical tests on a number of data sets show that the rule-based classifier performs comparably with standard neural network classifiers, while possessing unique advantages in terms of knowledge representation and probability estimation.

Phased burst error-correcting array codes
https://resolver.caltech.edu/CaltechAUTHORS:GOOieeetit93
Year: 1993
DOI: 10.1109/18.212304
Various aspects of single-phased burst-error-correcting array codes are explored. These codes are composed of two-dimensional arrays with row and column parities with a diagonally cyclic readout order; they are capable of correcting a single burst error along one diagonal. Optimal codeword sizes are found to have dimensions n1 × n2 such that n2 is the smallest prime number larger than n1. These codes are capable of reaching the Singleton bound. A new type of error, approximate errors, is defined; in q-ary applications, these errors cause data to be slightly corrupted and therefore still close to the true data level. Phased burst array codes can be tailored to correct these errors at even higher rates than before.

On loss functions which minimize to conditional expected values and posterior probabilities
https://resolver.caltech.edu/CaltechAUTHORS:20190314-155127224
Year: 1993
DOI: 10.1109/18.243457
A loss function, or objective function, is a function used to compare parameters when fitting a model to data. The loss function gives a distance between the model output and the desired output. Two common examples are the squared-error loss function and the cross entropy loss function. Minimizing the mean-square error loss function is equivalent to minimizing the mean square difference between the model output and the expected value of the output given a particular input. This property of minimization to the expected value is formalized as P-admissibility. The necessary and sufficient conditions for P-admissibility, leading to a parametric description of all P-admissible loss functions, are found. In particular, it is shown that two of the simplest members of this class of functions are the squared error and the cross entropy loss functions. One application of this work is in the choice of a loss function for training neural networks to provide probability estimates.

Learning finite state machines with self-clustering recurrent networks
https://resolver.caltech.edu/CaltechAUTHORS:ZENnc93
Year: 1993
DOI: 10.1162/neco.1993.5.6.976
Recent work has shown that recurrent neural networks have the ability to learn finite state automata from examples. In particular, networks using second-order units have been successful at this task. In studying the performance and learning behavior of such networks we have found that the second-order network model attempts to form clusters in activation space as its internal representation of states. However, these learned states become unstable as longer and longer test input strings are presented to the network. In essence, the network "forgets" where the individual states are in activation space. In this paper we propose a new method to force such a network to learn stable states by introducing discretization into the network and using a pseudo-gradient learning rule to perform training. The essence of the learning rule is that in doing gradient descent, it makes use of the gradient of a sigmoid function as a heuristic hint in place of that of the hard-limiting function, while still using the discretized value in the feedback update path. The new structure uses isolated points in activation space instead of vague clusters as its internal representation of states. It is shown to have similar capabilities in learning finite state automata as the original network, but without the instability problem. The proposed pseudo-gradient learning rule may also be used as a basis for training other types of networks that have hard-limiting threshold activation functions.

Fuzzy rule-based networks for control
https://resolver.caltech.edu/CaltechAUTHORS:20190315-142400048
Year: 1994
DOI: 10.1109/91.273129
We present a method for learning fuzzy logic membership functions and rules to approximate a numerical function from a set of examples of the function's independent variables and the resulting function value. This method uses a three-step approach to building a complete function approximation system: first, learning the membership functions and creating a cell-based rule representation; second, simplifying the cell-based rules using an information-theoretic approach for induction of rules from discrete-valued data; and, finally, constructing a computational (neural) network to compute the function value given its independent variables. This function approximation system is demonstrated with a simple control example: learning the truck and trailer backer-upper control system.

Discrete recurrent neural networks for grammatical inference
https://resolver.caltech.edu/CaltechAUTHORS:20190315-142359688
Year: 1994
DOI: 10.1109/72.279194
We describe a novel neural architecture for learning deterministic context-free grammars, or equivalently, deterministic pushdown automata. The unique feature of the proposed network is that it forms stable state representations during learning: previous work has shown that conventional analog recurrent networks can be inherently unstable in that they cannot retain their state memory for long input strings. We have recently introduced the discrete recurrent network architecture for learning finite-state automata. Here we extend this model to include a discrete external stack with discrete symbols. A composite error function is described to handle the different situations encountered in learning. The pseudo-gradient learning method (introduced in previous work) is in turn extended for the minimization of these error functions. Empirical trials validating the effectiveness of the pseudo-gradient learning method are presented, for networks both with and without an external stack. Experimental results show that the new networks are successful in learning some simple pushdown automata, though overfitting and non-convergent learning can also occur. Once learned, the internal representation of the network is provably stable; i.e., it classifies unseen strings of arbitrary length with 100% accuracy.

Learning texture discrimination rules in a multiresolution system
https://resolver.caltech.edu/CaltechAUTHORS:20190315-142400129
Year: 1994
DOI: 10.1109/34.310685
We describe a texture analysis system in which informative discrimination rules are learned from a multiresolution representation of the textured input. The system incorporates unsupervised and supervised learning via statistical machine learning and rule-based neural networks, respectively. The textured input is represented in the frequency-orientation space via a log-Gabor pyramidal decomposition. In the unsupervised learning stage a statistical clustering scheme is used for the quantization of the feature-vector attributes. A supervised stage follows in which labeling of the textured map is achieved using a rule-based network. Simulation results for the texture classification task are given. An application of the system to real-world problems is demonstrated.

Analog VLSI implementation for stereo correspondence between 2-D images
https://resolver.caltech.edu/CaltechAUTHORS:20190429-151824492
Year: 1996
DOI: 10.1109/72.485630
Many robotics and navigation systems utilizing stereopsis to determine depth have rigid size and power constraints and require direct physical implementation of the stereo algorithm. The main challenges lie in managing the communication between image sensor and image processor arrays, and in parallelizing the computation to determine stereo correspondence between image pixels in real-time. This paper describes the first comprehensive system level demonstration of a dedicated low-power analog VLSI (very large scale integration) architecture for stereo correspondence suitable for real-time implementation. The inputs to the implemented chip are the ordered pixels from a stereo image pair, and the output is a two-dimensional disparity map. The approach combines biologically inspired silicon modeling with the necessary interfacing options for a complete practical solution that can be built with currently available technology in a compact package. Furthermore, the strategy employed considers multiple factors that may degrade performance, including the spatial correlations in images and the inherent accuracy limitations of analog hardware, and augments the design with countermeasures.

Neural networks applied to traffic management in telephone networks
https://resolver.caltech.edu/CaltechAUTHORS:20190409-152840510
Year: 1996
DOI: 10.1109/5.537108
In this paper the application of neural networks to some of the network management tasks carried out in a regional Bell telephone company is described. Network managers monitor the telephone network for abnormal conditions and have the ability to place controls in the network to improve traffic flow. Conclusions are drawn regarding the utility and effectiveness of the neural networks in automating the network management tasks.

Analog VLSI system for active drag reduction
https://resolver.caltech.edu/CaltechAUTHORS:20190429-151824260
Year: 1996
DOI: 10.1109/40.540081
Drawing inspiration from the structure of shark skin, the authors are building a system to reduce drag along a surface. The entire question of active control of shark skin is speculative. Biologists hypothesize that sharks actively move their denticles. Indirect evidence of this is twofold. The denticles connect to muscles underneath the shark's skin. The total number of mechanoreceptive pressure sensors (pit organs) and their placement on a shark's body positively correlate with the speed of the species. For good active control, the shark may need many sensors to relay the current condition over its body. Although questions remain about sharks using active control, we concluded from this biological example that it may be beneficial to use controlled microscopic structures to reduce drag.

An Autonomous Water Vapor Plume Tracking Robot Using Passive Resistive Polymer Sensors
https://resolver.caltech.edu/CaltechAUTHORS:20191112-101402202
Year: 2000
DOI: 10.1023/a:1008970418316
A simple reactive robot is described which is capable of tracking a water vapor plume to its source. The robot acts completely within the plume and is endowed with no deliberate information about wind direction or speed, yet accurately tracks the plume upstream. The robot's behavior results from the behavior of simple resistive polymer sensors and their strategic placement on the robot's body.

Analog VLSI neural network with digital perturbative learning
https://resolver.caltech.edu/CaltechAUTHORS:20190326-072808079
Year: 2002
DOI: 10.1109/TCSII.2002.802282
Two feed-forward neural-network hardware implementations are presented. The first uses analog synapses and neurons with a digital serial weight bus. The chip is trained in the loop, with the computer performing control and weight updates. By training with the chip in the loop, it is possible to learn around circuit offsets. The second neural network also uses a computer for the global control operations, but all of the local operations are performed on chip. The weights are implemented digitally, and counters are used to adjust them. A parallel perturbative weight update algorithm is used. The chip uses multiple, locally generated, pseudorandom bit streams to perturb all of the weights in parallel. If the perturbation causes the error function to decrease, the weight change is kept; otherwise, it is discarded. Test results from a very large scale integration (VLSI) prototype are shown of both networks successfully learning digital functions such as AND and XOR.

Distributed odor source localization
https://resolver.caltech.edu/CaltechAUTHORS:20190403-142504387
Year: 2002
DOI: 10.1109/JSEN.2002.800682
This paper presents an investigation of odor localization by groups of autonomous mobile robots. First, we describe a distributed algorithm by which groups of agents can solve the full odor localization task. Next, we establish that conducting polymer-based odor sensors possess the combination of speed and sensitivity necessary to enable real world odor plume tracing, and we demonstrate that simple local position, odor, and flow information, tightly coupled with robot behavior, is sufficient to allow a robot to localize the source of an odor plume. Finally, we show that elementary communication among a group of agents can increase the efficiency of the odor localization system performance.
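The last two abstracts describe simple reactive plume-tracing behaviors, where sensor readings couple directly to robot actions. As a loose illustration of that class of strategy (not the algorithm from either paper), the sketch below chooses a steering action from a pair of odor sensor readings; the thresholds, action names, and the cast-when-lost rule are all hypothetical.

```python
# Hedged sketch of a reactive plume-tracing decision step. The specific
# thresholds and action vocabulary are illustrative assumptions, not
# taken from the papers above.

def plume_trace_step(left_odor: float, right_odor: float,
                     detect_threshold: float = 0.1,
                     balance_threshold: float = 0.05) -> str:
    """Choose a steering action from two odor sensor readings.

    Returns 'cast' (search crosswind after losing the plume), 'surge'
    (press on when centered in the plume), or a turn toward the
    stronger reading.
    """
    if max(left_odor, right_odor) < detect_threshold:
        return "cast"    # plume lost: sweep to reacquire it
    if abs(left_odor - right_odor) < balance_threshold:
        return "surge"   # readings balanced: continue upstream
    # Steer toward the side with the stronger odor signal
    return "turn_left" if left_odor > right_odor else "turn_right"
```

A controller would call this once per sensor sample and map each action onto wheel commands; richer versions could fold in the flow information the distributed-localization abstract mentions.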