Committee Feed
https://feeds.library.caltech.edu/people/Kajiya-J-T/committee.rss
A Caltech Library Repository Feed
Generated by python-feedgen. Last updated: Tue, 16 Apr 2024 15:28:49 +0000

Silicon Compilation
https://resolver.caltech.edu/CaltechETD:etd-11092006-140405
Authors: Johannsen, David Lawrence
Year: 1981
DOI: 10.7907/32ha-8453
<p>Modern integrated circuits are among the most complex systems designed by man. Although we have seen a rapid increase in fabrication technology, traditional design methodologies have not evolved at a rate commensurate with the increasing design complexity potential. These circuit design methodologies fail when applied to Very Large Scale Integrated (VLSI) circuit design. This thesis proposes a new design methodology which manages the complexity of VLSI design, allowing economical generation of correctly functioning circuits.</p>
<p>Cost is one measurement of a design methodology's value. A good design methodology rapidly and efficiently translates high level system specifications into working parts. Traditional techniques partition the translation process into many steps; each design tool is focused upon one of these design steps. This partitioning precludes the consideration of global constraints, and introduces an explosion of data transferred between design steps. The design process becomes error-prone and time consuming.</p>
<p>The technique of silicon compilation presented in this thesis automatically translates from high level specifications into correct geometric descriptions. In this approach, the designer interacts at a high level of abstraction, and need not be concerned with lower levels of detail, facilitating exploration of alternate system architectures. Furthermore, since the implementation is algorithmically generated, chip descriptions can be made correct by construction. Finally, the user is given technology independence, because the high level specification need not require knowledge of fabrication details. This flexibility allows the user to take advantage of technology advances.</p>
<p>This thesis explores various aspects of silicon compilation, and presents a prototype compiler, Bristle Blocks. The methodology is demonstrated through the design of several chips. The practicality of the methodology results from the concern for efficiency of the design process and of the chip designs produced by the system.</p>
https://thesis.library.caltech.edu/id/eprint/4476

A Spatiotemporal Probe of the Human Visual System by Application of Nonlinear Systems Identification Theory
https://resolver.caltech.edu/CaltechETD:etd-09132006-154529
Authors: Chen, Michael Jiu-Wei
Year: 1982
DOI: 10.7907/f613-bd08
<p>This thesis describes an attempt to apply signal processing and systems theory to the task of analyzing and interpreting evoked potential data and locating evoked potential sources by physical principles. Random impulse trains were used as inputs to characterize the human visual system. The method is analogous to the Wiener method for a continuous Gaussian white noise input. The restricted-diagonal Volterra series for discrete inputs is used by making certain restrictions on the integrals in a Volterra series. A modification of Lee and Schetzen's method was used in the estimation of the kernels.</p>
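<p>As an illustration (not taken from the thesis), the first-order step of a Lee-and-Schetzen-style cross-correlation can be sketched as follows. The toy kernel, the zero-mean white input (the discrete analogue of Gaussian white noise rather than an impulse train), and all names are illustrative assumptions:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "unknown" first-order kernel (impulse response) to recover.
true_kernel = np.exp(-np.arange(20) / 5.0) * np.sin(np.arange(20) / 2.0)

# Zero-mean white input and noisy output of the unknown system.
x = rng.standard_normal(50_000)
y = np.convolve(x, true_kernel)[: len(x)] + 0.1 * rng.standard_normal(len(x))

def first_order_kernel(x, y, memory):
    """Estimate h1(tau) = E[y(n) x(n - tau)] / Var[x] by cross-correlation."""
    var = np.var(x)
    return np.array(
        [np.mean(y[tau:] * x[: len(x) - tau]) for tau in range(memory)]
    ) / var

h1 = first_order_kernel(x, y, 20)
```

<p>With enough data the cross-correlation estimate converges to the underlying kernel; higher-order kernels are obtained analogously from higher-order cross-correlations.</p>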
<p>Forty-channel first-order kernels were computed for briefly appearing checkerboard patterns placed in left or right visual fields. The measured potential distribution showed a radical dependence on stimulus locus. Equivalent dipoles generally give excellent fits to the measured data, and the mapping between the visual field and these equivalent sources is similar to the commonly accepted mapping between the visual field and the visual cortex. Also, the results resemble those using conventional signal averaging.</p>
<p>First order kernels show better signal-to-noise ratio when compared to conventional signal averaging for the same experiment duration. Multichannel first-order kernels show that sources from early components are deep in the head as expected and in a believable region.</p>
<p>Results for the second-order kernels reveal occlusive interactions in the visual system and are interpreted relative to the first-order kernel. These inhibitions display different lengths of memories which suggest that they might arise from different neural origins.</p>
https://thesis.library.caltech.edu/id/eprint/3524

The Extension of Object-Oriented Languages to a Homogeneous, Concurrent Architecture
https://resolver.caltech.edu/CaltechETD:etd-09142006-085516
Authors: Lang, Charles Richard, Jr.
Year: 1982
DOI: 10.7907/9EVC-2X08
<p>A homogeneous machine architecture, consisting of a regular interconnection of many identical elements, exploits the economic benefits of VLSI technology. A concurrent programming model is presented that is related to object oriented languages such as Simula and Smalltalk. Techniques are developed which permit the execution of general purpose object oriented programs on a homogeneous machine. Both the hardware architecture and the supporting software algorithms are demonstrated to scale their performance with the size of the system.</p>
<p>The program objects communicate by passing messages. Objects may move about in the system and may have an arbitrary pointer topology. A distributed, on-the-fly garbage collection algorithm is presented which operates by message passing. Simulation of the algorithm demonstrates its ability to collect obsolete objects over the entire machine with acceptable overhead costs. Algorithms for maintaining the locality of object references and for implementing a virtual object capability are also presented.</p>
<p>To ensure the absence of hardware bottlenecks, a number of interconnection strategies are discussed and simulated for use in a homogeneous machine. Of those considered, the Boolean N-cube connection is demonstrated to provide the necessary characteristics.</p>
<p>The object oriented machine will provide increased performance as its size is increased. It can execute a general purpose, concurrent, object oriented language where the size of the machine and its interconnection topology are transparent to the programmer.</p>
https://thesis.library.caltech.edu/id/eprint/3534

Space-time Algorithms: Semantics and Methodology
https://resolver.caltech.edu/CaltechETD:etd-08312006-094203
Authors: Chen, Marina Chien-mei
Year: 1983
DOI: 10.7907/bfpj-t811
<p>A methodology for specifying concurrent systems is presented, beginning with a model of computation for such systems. The syntax and semantics of the language CRYSTAL are introduced. The specification of a system is called a space-time algorithm, since space and time are explicit parameters in the description. Fixed-point semantics is used for abstracting the behavior of a system from its implementation; the consistency between an implementation and its description can therefore be ensured using this method. Formal semantics for an arbitrary transistor network is given. An "interpreter" for space-time algorithms -- a hierarchical simulator -- for VLSI systems is presented. The framework can be viewed as a concurrent programming notation when describing communicating processes and as a hardware description notation when specifying integrated circuits.</p>
https://thesis.library.caltech.edu/id/eprint/3296

Robust Sentence Analysis and Habitability
https://resolver.caltech.edu/CaltechETD:etd-11032005-154728
Authors: Trawick, David James
Year: 1983
DOI: 10.7907/re17-h091
<p>Systems for using subsets of English with computers have progressed much in the area of linguistic coverage of well-formed sentences for a specific task. Some methods have also been devised for the treatment of input that is almost well-formed. Nevertheless, it is still quite easy to stray over the bounds imposed by current natural language systems. Without proper diagnosis, this leads to interactive systems that are not habitable, i.e., systems that are not pleasant to use because they are not able to perform up to the user's expectations.</p>
<p>This thesis presents an overall system for the treatment of several areas normally outside the limit of natural language systems, and for the diagnosis of any input. The system, Robust Sentence Analysis, includes procedures for handling ambiguous input, resolving input with anaphors (e.g. pronouns), making several kinds of major and minor corrections to input, and the interaction of all of these areas. The system does not treat every aspect of these methods of human interaction, but does provide for the more prevalent forms as found in simulations of user interaction in several modes: face-to-face, terminal-to-terminal, and human-to-computer (using a previously implemented natural language system). Thus the system incorporates the most likely forms found in human performance. Diagnostics are designed to lead the user back into the boundaries of the system.</p>
<p>The Robust Sentence Analysis system is implemented as a part of the ASK System, <u>A</u> <u>S</u>imple <u>K</u>nowledgeable System.</p>
https://thesis.library.caltech.edu/id/eprint/4393

Parallel Machines for Computer Graphics
https://resolver.caltech.edu/CaltechETD:etd-11092005-140159
Authors: Ullner, Michael K.
Year: 1983
DOI: 10.7907/wxmq-sx43
<p>Computer graphics provides some ideal applications for the kind of highly parallel implementations made possible by advances in integrated circuit technology. Specifically, hidden line and hidden surface algorithms, while easily defined and simple in concept, entail a substantial amount of computation. This requirement fits the characteristics of integrated circuit technology, where modular designs involving regular communication between many concurrent operations are rewarded with high performance at an acceptable cost.</p>
<p>Ray tracing is a very flexible technique that can be used to produce some of the most realistic of all computer generated images by simulating the interactions of light rays with surfaces in a modeled scene. Because light rays are mutually independent, many may be processed simultaneously, and the potential for concurrency is great. One architecture for expediting a ray tracing algorithm consists of a conventional computer equipped with a special purpose peripheral device for locating the intersections of rays and surfaces. This intersection computation is the most time consuming aspect of a ray tracing algorithm. Although the attached processor configuration can produce images more quickly than an unaided computer, its performance is limited. Alternatively, a pipeline of surface processors can replace the peripheral device. Each processor computes the intersections of its stored surface with rays that flow through the pipe. Such a machine can be quite fast, and its performance can be increased by lengthening the pipeline, but the component processors are not very effectively utilized. A third approach combines the advantages of the prior two machines by using an array of processors, each simulating a distinct subvolume of the modeled world by treating light rays traveling through space as messages flowing between processors. Local communication is sufficient because light rays travel continuously through space.</p>
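<p>The per-surface intersection work that each pipeline stage performs can be sketched as follows. This is an illustrative minimum, not the thesis's hardware: a sphere primitive stands in for a general surface, and the nearest-hit fold over a surface list plays the role of the pipeline.</p>

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return distance t >= 0 to the nearest ray-sphere intersection, or None.
    The ray direction is assumed to be unit length (so a = 1 in the quadratic)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0 else None

def trace(ray, spheres):
    """Fold one ray through the 'pipeline' of surfaces, keeping the nearest hit."""
    origin, direction = ray
    best = None
    for center, radius in spheres:      # one pipeline stage per stored surface
        t = ray_sphere(origin, direction, center, radius)
        if t is not None and (best is None or t < best):
            best = t
    return best

nearest = trace(((0, 0, 0), (0, 0, 1)), [((0, 0, 5), 1.0), ((0, 0, 10), 1.0)])
```

<p>In the pipelined machine the loop body runs in a distinct processor per surface, so a new ray can enter the pipe every cycle.</p>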
<p>In real time computer graphics, successive images must be produced in times that are imperceptible to a viewer. Although the ray tracing machines fall short of this performance, it is possible to compromise image quality in order to produce a highly parallel machine capable of real time operation. The processors in such a machine are organized to form a binary tree. Leaf processors scan-convert surfaces, producing a sequence of segments, where a segment is the portion of a surface that appears on a single scan line of the display. Processors towards the root of the tree accept two such segment sequences and produce a third in which all segment overlap has been resolved. The final image is available at the root of the tree. The communication bottleneck that would otherwise occur at the root can be eliminated by breaking out parallel roots, and the resulting tree may be extended to scenes of almost arbitrary complexity merely by increasing the supply of available processors.</p>
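<p>The merge performed at each interior tree node can be sketched as an event sweep over one scan line. The segment representation (x-start, x-end, depth, label) and the smaller-depth-wins rule are illustrative assumptions:</p>

```python
def merge_segments(seq_a, seq_b):
    """Merge two segment sequences for one scan line, resolving overlap by
    depth (smaller z wins). Segments are (x_start, x_end, depth, label)."""
    segments = seq_a + seq_b
    xs = sorted({x for s in segments for x in (s[0], s[1])})
    out = []
    for x0, x1 in zip(xs, xs[1:]):
        covering = [s for s in segments if s[0] <= x0 and s[1] >= x1]
        if not covering:
            continue
        nearest = min(covering, key=lambda s: s[2])
        # Coalesce adjacent spans won by the same surface.
        if out and out[-1][1] == x0 and out[-1][2:] == nearest[2:]:
            out[-1] = (out[-1][0], x1, nearest[2], nearest[3])
        else:
            out.append((x0, x1, nearest[2], nearest[3]))
    return out

merged = merge_segments([(0, 10, 5.0, "A")], [(4, 6, 1.0, "B")])
```

<p>Because the output is again a segment sequence with all overlap resolved, the same operation composes up the tree until the root holds the visible image for the scan line.</p>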
<p>Massive parallelism can also be applied to the problem of removing hidden edges from line drawings. A suitable architecture takes the form of a pipeline in which each processor is dedicated to the handling of a single polygon edge. These processors successively clip line segments passing through the pipeline to eliminate portions hidden behind surfaces. Each edge processor can be constructed out of little more than three serial multipliers.</p>
<p>The machines described here are varied in organization, and each functions differently, but their treatment of sorting is one ingredient common to all. Sorting is a key component of hidden surface algorithms running on conventional computers, but its extensive communication requirements make it costly for use in a highly integrated design. Consequently, the highly parallel machines described here operate largely without sorting. Instead, they maintain information in sorted order or make use of already sorted information to limit communication requirements.</p>
https://thesis.library.caltech.edu/id/eprint/4471

Automated Performance Optimization of Custom Integrated Circuits
https://resolver.caltech.edu/CaltechETD:etd-11072005-081513
Authors: Trimberger, Stephen Mathias
Year: 1983
DOI: 10.7907/8YDZ-G637
<p>The complexity of integrated circuits requires a hierarchical design methodology that allows the user to divide the problem into pieces, design each piece independently, and assemble the pieces into the complete system. The design hierarchy brings out composition problems, problems that are a property of the assembly as a whole, not of one single instance in the hierarchy.</p>
<p>Recent research has produced tools that automate part of the composition task - the logical connection of the pieces. However, these tools do not ensure that signals driven over these connections will be driven sufficiently to give reasonable cycle speed of the resulting chips. It is easily possible to specify an assembly in which a small-sized gate is required to drive an enormous load. Parasitic capacitance of the wiring made automatically by the logical connection tool can be the dominant source of delay, so assembly tools can actually worsen the performance of the circuit and hide this fact from the designer.</p>
<p>When required to make large circuits, automated layout tools such as PLA generators can blindly make layouts that give abysmally poor performance. Here again, the delay is in a part of the circuit that the designer did not specify, so it is hidden. Finding and correcting these problems is a difficult and time-consuming task in integrated circuit design, one that consumes vastly more designer time and computer time than the simple assembly of the chip.</p>
<p>The task of guaranteeing that circuits meet performance specifications has been left mainly to the designer. Computer aided design has provided analysis tools, tools that tell the designer the performance statistics of the current design. It is then the designer's burden to interpret the performance statistics and use them as guides to make changes in the circuit.</p>
<p>This thesis views performance optimization as an electrical composition task. Poor performance as a result of mismatched loads on devices is a problem of composition and should be corrected by the composition tool. Such a tool is presented in this thesis -- a program that automatically sizes transistors in a symbolic description of a chip to match the load the transistors are driving. The results are encouraging: they show that delays can be cut by a factor of two in many current designs.</p>
https://thesis.library.caltech.edu/id/eprint/4438

A Tracking Phase Vocoder and its Use in the Analysis of Ensemble Sounds
https://resolver.caltech.edu/CaltechETD:etd-08312006-105836
Authors: Dolson, Mark Barry
Year: 1983
DOI: 10.7907/0RC3-V146
<p>Additive analysis-synthesis using the phase vocoder is a powerful tool for the exploration of musical timbre. In this research, previous investigations of this subject are extended in two significant directions.</p>
<p>First, an improved analysis of the phase vocoder is developed to explain the errors introduced by undersampling and modification of the magnitude and phase-derivative signals. Two sources of error are identified. It is shown that the first of these involves crosstalk between adjacent frequency channels, and can be eliminated through the development of a tracking version of the phase vocoder. Alternatively, restrictions can be placed on the phase-derivative signal to preserve the absolute phase. The second source of error appears to be inherent in the phase vocoder formulation.</p>
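<p>The channel signals under discussion can be sketched with a minimal phase-vocoder analysis pass: short-time transforms at hop R, with each channel's frame-to-frame phase difference unwrapped into an instantaneous-frequency estimate. The window, hop, and test tone below are illustrative choices, not the thesis's parameters:</p>

```python
import numpy as np

fs, N, R = 8000, 256, 64                          # sample rate, window, hop
t = np.arange(4096) / fs
signal = np.sin(2 * np.pi * 1000.0 * t)           # 1 kHz test tone

window = np.hanning(N)
frames = [signal[i:i + N] * window for i in range(0, len(signal) - N, R)]
spectra = np.array([np.fft.rfft(f) for f in frames])

k = int(np.argmax(np.abs(spectra[1])))            # peak frequency channel
dphi = np.angle(spectra[2][k]) - np.angle(spectra[1][k])

# Subtract the bin centre's expected phase advance over one hop, then wrap
# the deviation into (-pi, pi] to recover the channel's true frequency.
expected = 2 * np.pi * k * R / N
dev = (dphi - expected + np.pi) % (2 * np.pi) - np.pi
inst_freq = (expected + dev) * fs / (2 * np.pi * R)
```

<p>Undersampling these magnitude and phase-derivative signals, or modifying them before resynthesis, is precisely where the errors analyzed in the thesis enter.</p>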
<p>Second, the tracking phase vocoder is used to investigate differences between solo and ensemble sounds. A search is conducted for the minimal set of cues which will produce an ensemble sensation. It is shown that the primary requirement is that there be at least four to eight harmonics, each of which has a characteristic amplitude modulation proportional to its frequency. In addition, a number of issues related to the quality of the ensemble sensation and its efficient synthesis are examined.</p>
https://thesis.library.caltech.edu/id/eprint/3297

VLSI Computational Structures Applied to Fingerprint Image Analysis
https://resolver.caltech.edu/CaltechTHESIS:03202012-091934255
Authors: Megdal, Barry Bruce
Year: 1983
DOI: 10.7907/9tyn-bc11
<p>Advances in integrated circuit technology have made possible the application of LSI and VLSI techniques to a wide range of computational problems. Image processing is one of the areas that stands to benefit most from these techniques. This thesis presents an architecture suitable for VLSI implementations which enables a wide range of image processing operations to be done in a real-time, pipelined fashion. These operations include filtering, thresholding, thinning and feature extraction.</p>
<p>The particular class of images chosen for study is fingerprints. There exists a long history of fingerprint classification and comparison techniques used by humans, but previous attempts at automation have met with little success. This thesis makes use of VLSI image processing operations to create a graph structure representation (minutia graph) of the inter-relationships of various low-level features of fingerprint images. An approach is then presented which allows derivation of a metric for the similarity of these graphs and of the fingerprints which they represent. An efficient algorithm for derivation of maximal common subgraphs of two minutia graphs serves as the basis for computation of this metric, and is itself based upon a specialized clique-finding algorithm. Results of cross comparison of fingerprints from multiple individuals are presented.</p>
https://thesis.library.caltech.edu/id/eprint/6855

Biophysical Source Modeling of Some Exogenous and Endogenous Components of the Human Event-Related Potential
https://resolver.caltech.edu/CaltechETD:etd-11072005-143659
Authors: Eriksen, K. Jeffrey (eriksenj@ohsu.edu)
Year: 1984
DOI: 10.7907/SPSG-FX90
<p>Methods of dipole localization were applied to human scalp-recorded electrical activity associated with a simple auditory cognitive discrimination task.</p>
<p>Human neuroanatomy and neurophysiology were reviewed from a biophysical standpoint in order to describe the probable neurogenesis of electrical activity in the brain and on the surface of the head. Topographic electroencephalography (EEG) analysis and source localization methods were historically reviewed in detail, followed by a brief review of the history of non-invasive evoked potential (EP) and magnetic field measurements of human central nervous system activity.</p>
<p>Four well known simple cognitive tasks that were known to elicit non-obligatory brain responses were considered, and the odd-ball task was chosen. Three subjects listened to a series of two tones, one frequent and one rare, and counted the rare tones. During task performance, 40 to 46 channels of EEG activity were recorded from their scalps.</p>
<p>From the EEG data, average evoked potentials (aEP) were calculated for the frequent and rare conditions. From these a difference response was calculated. All three of these EPs were plotted as equipotential maps over a schematic of a head for topographic display and the major distribution features discussed. These aEPs and maps matched those previously reported in the literature.</p>
<p>From estimates of the spatial electrical power over the head, four peak components were selected for analysis by equivalent source modeling (ESM). These were designated the FP40, FP100, FP200, and FP350, where FP stands for field power. ESM demonstrated that one centrally located point dipole or two bilaterally symmetric dipoles could model the empirical data quite well. These results were discussed in relation to other topographic studies, as well as studies of intracranial recordings, lesions, and animal models. The source locations found were consistent with auditory cortical locations for the obligatory sensory peaks (FP40, FP100, FP200) and with brainstem locations as the source of the FP350 cognitive event-related peak.</p>
https://thesis.library.caltech.edu/id/eprint/4439

The Dialogue Designing Dialogue System
https://resolver.caltech.edu/CaltechETD:etd-01022007-104438
Authors: Ho, Tai-Ping
Year: 1984
DOI: 10.7907/5v76-gn68
<p>This thesis presents an interactive system, the Dialogue Designing Dialogue System, that integrates natural language programming of user dialogues with a natural language system, the ASK system. This interactive system satisfies the basic criteria of a general programming language.</p>
<p>The system presented in this thesis may be referred to as a "meta-dialogue" system. Using this meta-dialogue, the user implements domain specific dialogues which he and others can then use, providing highly succinct and efficient interfaces for interaction with the computer.</p>
<p>The system combines the use of a syntax-directed and a semantic-directed system, which gives the user flexibility in specifying additional capabilities, and thus in turn gives the system itself a much broader domain of application. Further, the system integrates natural language programming, dialogue directed user interface, underlying data base, and text handling capabilities, so that it does not require users to have programming background in order to establish an application system for themselves.</p>
https://thesis.library.caltech.edu/id/eprint/2

Heterogeneous Data Base Access
https://resolver.caltech.edu/CaltechETD:etd-01022007-110210
Authors: Papachristidis, Alexandros Christou
Year: 1984
DOI: 10.7907/5sdf-7w07
<p>This thesis presents a design for accessing commercially available data bases and other data base systems from a given, "local" data base system. Using this design, data from the local data base and from one or more "foreign" data bases will be integrated in framing the response to a single query. The design does not presume any standardization or integration of any kind on the part of a target data base.</p>
<p>An expert system has been designed, to be used by the applications programmer, for acquiring the information about a target data base, the communication path with the target computer, and the protocol for sending a data retrieval request. It also obtains from the applications programmer the nature of the data to be accessed and instructions for converting the incoming data into the "local" system's own format. This information is then associated with each word referring to data residing in the "foreign" data base, and is stored in the lexicon of the "local" system. The design also includes the extension of the underlying, host, system so that this information is properly used at the appropriate points in the processing of a user query.</p>
<p>State-of-the-art techniques used for interfacing heterogeneous data bases and natural language front-ends to data base systems are examined. The unique accomplishments of this thesis are identified. At the present time, there are no other systems, either commercial or research, that provide the heterogeneous access capabilities of the design presented here.</p>
<p>The design of Heterogeneous Data Base Access has been implemented as a part of the ASK System, A Simple Knowledgeable System, providing natural language interface with the user.</p>
https://thesis.library.caltech.edu/id/eprint/3

Bit-Serial Reed-Solomon Decoders in VLSI
https://resolver.caltech.edu/CaltechETD:etd-03252008-090414
Authors: Whiting, Douglas Lee
Year: 1985
DOI: 10.7907/bjd1-9j44
<p>Reed-Solomon codes are known to provide excellent error-correcting capabilities on many types of communication channels. Although efficient decoding algorithms have been known for over fifteen years, currently available decoder systems are large both in size and in power consumption. Such systems typically use a single, very fast, fully parallel finite-field multiplier in a sequential architecture. Thus, more processing time is required as the code redundancy increases. By using many arithmetic units on a single chip, it is possible to exploit the concurrency inherent in the decoding algorithms to attain performance levels previously possible only with large ECL systems.</p>
<p>An investigation into the structure of binary extension fields reveals that the common arithmetic operations used in decoding can be implemented quite efficiently in a bit-serial fashion, using any of several bases over GF(2). Berlekamp's dual-basis multiplier is generalized to the product of two arbitrary field elements, and a necessary and sufficient condition is then derived for the existence of a self-dual basis. Efficient methods for bit-serial multiplicative inversion are also discussed, greatly reducing the complexity traditionally associated with this operation.</p>
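<p>As an illustration of bit-serial field arithmetic (a standard-basis shift-and-reduce loop, not the dual-basis multiplier the thesis generalizes), a GF(2<sup>8</sup>) product can be formed one bit of the multiplier per clock. The field polynomial x<sup>8</sup> + x<sup>4</sup> + x<sup>3</sup> + x + 1 (0x11B) is the familiar AES choice, used here only as an example:</p>

```python
def gf_mul(a, b, poly=0x11B, width=8):
    """Multiply a and b in GF(2^width), consuming one bit of b per step."""
    result = 0
    for _ in range(width):      # one 'clock tick' per bit of the multiplier
        if b & 1:
            result ^= a         # conditionally accumulate the partial product
        b >>= 1
        a <<= 1
        if a >> width:          # reduce modulo the field polynomial
            a ^= poly
    return result
```

<p>In hardware the loop becomes a width-long shift-register datapath: an XOR accumulator plus the fixed reduction taps, with no full parallel multiplier required.</p>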
<p>Using these bit-serial techniques, several architectures for implementing each phase of the known Reed-Solomon decoding algorithms are presented and compared. Simple methods are presented to allow power-sum syndrome decoders to handle codes with a variety of block lengths and redundancies. Each approach comes within a factor of log <i>n</i> (where <i>n</i> is the block length of the code) of the recently derived asymptotic lower bounds for both time and area. Results from a student project to lay out a prototype decoder chip using the Berlekamp-Massey algorithm are also discussed. By utilizing the parallelism inherent in the key equation solution, these architectures can decode received words at a speed independent of the redundancy of the code.</p>
https://thesis.library.caltech.edu/id/eprint/1117

Combining Computation with Geometry
https://resolver.caltech.edu/CaltechETD:etd-04102008-142130
Authors: Lien, Sheue-Ling Chang
Year: 1985
DOI: 10.7907/n1qe-h846
<p>This thesis seeks to establish mathematical principles and to provide efficient solutions to various time consuming operations in computer-aided geometric design. It contains a discussion of three major topics: (1) design validation by means of object interference detection, (2) object reconstruction through the union, intersection, and subtraction of two polyhedra, and (3) calculation of basic engineering properties such as volume, center of mass, or moments of inertia.</p>
<p>Two criteria are presented for solving the problems of point-polygon enclosure and point-polyhedron enclosure in object interference detection. An algorithm for efficient point-polyhedron-enclosure detection is presented. Singularities encountered in point-polyhedron-enclosure detection are categorized and simple methods for resolving them are also included.</p>
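<p>The two-dimensional case of such an enclosure criterion can be sketched with the classic crossing-number test: a horizontal ray from the point crosses the polygon boundary an odd number of times iff the point is inside. The singularities the thesis categorizes (a ray through a vertex, a point on an edge) are sidestepped here by the half-open edge test, which avoids double-counting vertices:</p>

```python
def point_in_polygon(px, py, vertices):
    """Crossing-number point-polygon enclosure test.
    `vertices` lists the polygon corners in order (closed implicitly)."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        if (y1 > py) != (y2 > py):          # edge straddles the horizontal ray
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:                # crossing lies to the right of p
                inside = not inside
    return inside

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```

<p>The point-polyhedron case follows the same parity idea with a ray against faces, which is where the richer singularity taxonomy becomes necessary.</p>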
<p>A new scheme for representing solid objects, called skeletal polyhedron representation, is proposed. Also included are algorithms for performing set operations on polyhedra (or polygons) represented in skeletal polyhedron representation, algorithms for performing edge-edge intersection and face-face intersection in a set operation, and a perturbation method which can be used to resolve singularities for an easy execution of edge-edge intersection and face-face intersection.</p>
<p>A symbolic method for calculating basic engineering properties (such as volume, center of mass, moments of inertia, and similar integral properties of geometrically complex solids) is proposed. The same method is generalized for computing the integral properties of a set-combined polyhedron, and for computing the integral properties of an arbitrary polyhedron in m-dimensional (R<sup>m</sup>) space.</p>
https://thesis.library.caltech.edu/id/eprint/1333

Part I. Fold Continuation and the Flow Between Rotating, Coaxial Disks. Part II. Equilibrium Chaos. Part III. A Mesh Selection Algorithm for Two-Point Boundary Value Problems
https://resolver.caltech.edu/CaltechETD:etd-03262008-150456
Authors: Fier, Jeffrey Michael
Year: 1985
DOI: 10.7907/cs9b-ft10
<p>Part I:</p>
<p>We consider folds in the solution surface of nonlinear equations with two free parameters. A system of equations whose solutions are fold paths is formulated and proved to be non-singular in a neighborhood of a fold, thus making continuation possible. Efficient numerical algorithms employing block Gaussian elimination are developed for applying Euler-Newton pseudo-arclength continuation to the system, and these are shown to require fewer operations than other methods.</p>
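<p>Euler-Newton pseudo-arclength continuation can be sketched on the toy fold f(x, λ) = x² + λ, whose solution path folds at (0, 0). The 2×2 bordered system below stands in for the thesis's block-eliminated systems, and the step size and tolerances are illustrative; the key point is that the bordered Jacobian stays nonsingular at the fold, so continuation passes through it:</p>

```python
import numpy as np

def f(x, lam):
    return x * x + lam                  # solution path folds at (x, lam) = (0, 0)

def tangent_at(x, prev=None):
    # Null vector of the 1x2 Jacobian [2x, 1], oriented like `prev`.
    t = np.array([-1.0, 2 * x])
    t /= np.linalg.norm(t)
    if prev is not None and t @ prev < 0:
        t = -t
    return t

def continue_path(x, lam, ds=0.1, steps=40):
    t = tangent_at(x)
    if t[1] < 0:                        # start moving toward the fold
        t = -t
    path = [(x, lam)]
    for _ in range(steps):
        xp, lp = x + ds * t[0], lam + ds * t[1]     # Euler predictor
        for _ in range(20):                          # Newton corrector
            J = np.array([[2 * xp, 1.0], t])         # bordered Jacobian
            r = np.array([f(xp, lp),
                          (xp - x) * t[0] + (lp - lam) * t[1] - ds])
            step = np.linalg.solve(J, -r)
            xp, lp = xp + step[0], lp + step[1]
            if np.abs(r).max() < 1e-10:
                break
        x, lam = xp, lp
        t = tangent_at(x, prev=t)
        path.append((x, lam))
    return path

path = continue_path(1.0, -1.0)
```

<p>Starting on the upper branch at (1, −1), the path rounds the fold and continues onto the lower branch, which a parameter-only continuation cannot do.</p>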
<p>To demonstrate the use of these methods we calculate the flow between two infinite, rotating disks. For Reynolds number less than 1000, six separate solution sheets are found and completely described. Plots of 47 solutions for three values of the disk speed ratio and for Reynolds number equal to 625 are shown. These are compared with the solutions found by previous investigators.</p>
<p>Part II:</p>
<p>Two ordinary differential equations with parameters whose solution paths exhibit an infinite sequence of folds clustered about a limiting value are studied. Using phase-plane analysis, expressions for the limiting ratios of the parameter values at which these folds occur are derived and the limiting values are shown to be non-universal.</p>
<p>Part III:</p>
<p>A mesh selection algorithm for use in a code to solve first-order nonlinear two-point boundary value problems with separated end conditions is described. The method is based on equidistributing the global error of the box scheme, a numerical estimate of which is obtained from Richardson extrapolation. Details of the algorithm and examples of its performance on non-stiff and stiff problems are presented.</p>
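The Richardson-extrapolation error estimate at the heart of Part III can be sketched in a few lines. This is a hypothetical scalar-ODE version, not the two-point boundary value setting of the thesis: the box scheme is second order, so comparing an n-step solution with a 2n-step solution gives err(h) ≈ (4/3)(y_h − y_{h/2}):

```python
import math

def box_scheme(f, y0, t0, t1, n):
    """Integrate y' = f(t, y) with the implicit box (midpoint) scheme."""
    h = (t1 - t0) / n
    y, t = y0, t0
    for _ in range(n):
        ynew = y
        for _ in range(50):       # fixed-point solve of the implicit step
            ynew = y + h * f(t + h / 2.0, (y + ynew) / 2.0)
        y, t = ynew, t + h
    return y

def global_error_estimate(f, y0, t0, t1, n):
    """Richardson estimate of the n-step global error.

    For a second-order scheme, err(h) = C h^2 + O(h^4), so
    y_h - y_{h/2} = (3/4) err(h) and err(h) ~ (4/3)(y_h - y_{h/2}).
    """
    return (4.0 / 3.0) * (box_scheme(f, y0, t0, t1, n)
                          - box_scheme(f, y0, t0, t1, 2 * n))

# y' = y, y(0) = 1: the exact value at t = 1 is e, so the estimate
# can be checked against the true global error.
est = global_error_estimate(lambda t, y: y, 1.0, 0.0, 1.0, 20)
true_err = box_scheme(lambda t, y: y, 1.0, 0.0, 1.0, 20) - math.e
```

In a mesh selection code, estimates of this kind are computed per subinterval and the mesh points are then moved to equidistribute them.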
https://thesis.library.caltech.edu/id/eprint/1162
ANIMAC: A Multiprocessor Architecture for Real-Time Computer Animation
https://resolver.caltech.edu/CaltechETD:etd-03262008-092532
Authors: {'items': [{'id': 'Whelan-Daniel-Steven', 'name': {'family': 'Whelan', 'given': 'Daniel Steven'}, 'show_email': 'NO'}]}
Year: 1985
DOI: 10.7907/0qnw-g372
<p>Advances in integrated circuit technology have been largely responsible for the growth of the computer graphics industry. This technology promises additional growth through the remainder of the century. This dissertation addresses how this future technology can be harnessed and used to construct very high performance real-time computer graphics systems.</p>
<p>This thesis proposes a new architecture for real-time animation engines. The ANIMAC architecture achieves high performance by utilizing a two-dimensional array of processors that determine visible surfaces in parallel. An array of sixteen processors with only nearest neighbor interprocessor communications can produce real-time shadowed images of scenes containing 100,000 triangles.</p>
<p>The ANIMAC architecture is based upon analysis and simulations of various parallelization techniques. These simulations suggest that the viewing space be spatially subdivided and that each processor produce a visible surface image for several viewing space subvolumes. Simple assignments of viewing space subvolumes to processors are shown to offer high parallel efficiencies.</p>
<p>Simulations of parallel algorithms were driven with data derived from real scenes since analysis of scene composition suggested that using simplistic models of scene composition might lead to incorrect results.</p>
<p>The ANIMAC architecture required the development of a shadowing algorithm which was tailored to its parallel environment. This algorithm separates shadowing into local and foreign effects. Its implementation allows individual processors to compute shadowing effects for their image regions utilizing only very local information.</p>
<p>The design of the ANIMAC processors makes extensive use of new VLSI architectures. A formerly proposed processor per object architecture is used to determine visible surfaces while new processor per object and processor per pixel architectures are used to determine shadowing effects.</p>
<p>It is estimated that the ANIMAC architecture can be realized in the early 1990's. Realizing this architecture will require considerable amounts of hardware and capital yet its cost will not be out of line when compared with today's real-time computer graphics systems.</p>
https://thesis.library.caltech.edu/id/eprint/1155
Monte Carlo Methods for 2-D Compaction
https://resolver.caltech.edu/CaltechETD:etd-03202008-091615
Authors: {'items': [{'id': 'Mosteller-Richard-Craig', 'name': {'family': 'Mosteller', 'given': 'Richard Craig'}, 'show_email': 'NO'}]}
Year: 1986
DOI: 10.7907/mwrq-t026
<p>A new method of compaction for VLSI circuits is presented. Compaction is done simultaneously in two dimensions and uses a Monte Carlo simulation method often referred to as simulated annealing for optimization. A new curvilinear representation for VLSI circuits, specifically chosen to make the compaction efficient, is developed. Experiments with numerous cells are presented that demonstrate this method to be as good as, or better than, the hand compaction previously applied to these cells. Hand compaction was the best previously known method of compaction. An experimental evaluation is presented of how the run time complexity grows as the number, <i>N</i>, of objects in the circuit increases. The results of this evaluation indicate that the run time growth is order <i>O</i>(<i>N</i> log(<i>A</i>))<i>f</i>(<i>d</i>) where <i>f</i>(<i>d</i>) is a function of the density, <i>d</i>, and <i>A</i> is the initial cell area. The function <i>f</i>(<i>d</i>) appears to have negligible or no dependence on <i>N</i>. A hierarchical composition approach is developed which takes advantage of the capability of the curvilinear representation and the 2-dimensional compaction technique.</p>
https://thesis.library.caltech.edu/id/eprint/1035
A VLSI Architecture for Concurrent Data Structures
https://resolver.caltech.edu/CaltechETD:etd-03252008-140428
Authors: {'items': [{'id': 'Dally-William-James', 'name': {'family': 'Dally', 'given': 'William James'}, 'show_email': 'NO'}]}
Year: 1986
DOI: 10.7907/f8d5-x741
<p>Concurrent data structures simplify the development of concurrent programs by encapsulating commonly used mechanisms for synchronization and communication into data structures. This thesis develops a notation for describing concurrent data structures, presents examples of concurrent data structures, and describes an architecture to support concurrent data structures.</p>
<p>Concurrent Smalltalk (CST), a derivative of Smalltalk-80 with extensions for concurrency, is developed to describe concurrent data structures. CST allows the programmer to specify objects that are distributed over the nodes of a concurrent computer. These distributed objects have many <i>constituent objects</i> and thus can process many messages simultaneously. They are the foundation upon which concurrent data structures are built.</p>
<p>The <i>balanced cube</i> is a concurrent data structure for ordered sets. The set is distributed by a balanced recursive partition that maps to the subcubes of a binary <i>n</i>-cube using a Gray code. A search algorithm, VW search, based on the distance properties of the Gray code, searches a balanced cube in <i>O</i>(log <i>N</i>) time. Because it does not have the root bottleneck that limits all tree-based data structures to <i>O</i>(1) concurrency, the balanced cube achieves <i>O</i>(<i>N</i>/log <i>N</i>) concurrency.</p>
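The Gray-code distance property that such a layout relies on is easy to state in code. A minimal sketch (not the thesis's implementation of VW search): the binary-reflected Gray code maps consecutive keys of the ordered set onto n-cube nodes that are exactly one hop apart, so walking the order requires only nearest-neighbor messages:

```python
def gray(i):
    """Binary-reflected Gray code of i."""
    return i ^ (i >> 1)

def hamming(a, b):
    """Number of bit positions in which a and b differ."""
    return bin(a ^ b).count("1")

# Keys 0..15 of an ordered set laid out on a binary 4-cube:
# consecutive keys land on physically adjacent nodes.
nodes = [gray(i) for i in range(16)]
assert sorted(nodes) == list(range(16))            # a 1-1 map onto the cube
assert all(hamming(nodes[i], nodes[i + 1]) == 1    # neighbors differ by 1 bit
           for i in range(15))
```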
<p>Considering graphs as concurrent data structures, graph algorithms are presented for the shortest path problem, the max-flow problem, and graph partitioning. These algorithms introduce new synchronization techniques to achieve better performance than existing algorithms.</p>
<p>A message-passing, concurrent architecture is developed that exploits the characteristics of VLSI technology to support concurrent data structures. Interconnection topologies are compared on the basis of dimension. It is shown that minimum latency is achieved with a very low dimensional network. A deadlock-free routing strategy is developed for this class of networks, and a prototype VLSI chip implementing this strategy is described. A message-driven processor complements the network by responding to messages with a very low latency. The processor directly executes messages, eliminating a level of interpretation. To take advantage of the performance offered by specialization while at the same time retaining flexibility, processing elements can be specialized to operate on a single class of objects. These <i>object experts</i> accelerate the performance of all applications using this class.</p>
https://thesis.library.caltech.edu/id/eprint/1122
Images, Numerical Analysis of Singularities and Shock Filters
https://resolver.caltech.edu/CaltechETD:etd-06192006-090538
Authors: {'items': [{'id': 'Rudin-Leonid-Iakov', 'name': {'family': 'Rudin', 'given': 'Leonid Iakov'}, 'show_email': 'NO'}]}
Year: 1987
DOI: 10.7907/5hr8-8412
<p>This work is concerned primarily with establishing a natural mathematical framework for the Numerical Analysis of Singularities, a term which we coined for this new evolving branch of <i>numerical</i> analysis.</p>
<p>The problem of analyzing singular behavior of nonsmooth functions is implicitly or explicitly ingrained in any successful attempt to extract information from images. The abundance of papers on so-called Edge Detection testifies to this statement.</p>
<p>We attempt to make a fresh start by reformulating this old problem in the rigorous context of the Theory of Generalized Functions of several variables, with stress put on the computational aspects of essential singularities. We state and prove a variant of the Divergence Theorem for discontinuous functions, which we call the Fundamental Theorem of Edge Detection, for it is the backbone of the numerical analysis advocated here, based on the estimates of contributions furnished by the essential singularities of functions.</p>
<p>We further extend this analysis to arbitrary order singularities by utilization of Miranda's calculus of tangential derivatives. With this machinery we are able to explore computationally the internal geometry of singularities, including singular, i.e., nonsmooth, singularity boundaries. This theory gives rise to a singularity detection scheme called "rotating thin masks" which is applicable to arbitrary order n-dimensional essential singularities. In the particular implementation we combined a first-order detector with various curvature detectors derived here. Preliminary experimental results are presented. We also derive a new class of nonlinear singularity detection schemes based on tensor products of distributions.</p>
<p>Finally, a novel computational approach to the problem of image enhancement is presented. We call this construction the Shock Filters, since it is founded on nonlinear PDEs whose solutions exhibit formation of discontinuous profiles, corresponding to shock waves in gas dynamics. An algorithm for an experimental Shock Filter, based on an upwind finite difference scheme, is presented and tested on one- and two-dimensional data.</p>
https://thesis.library.caltech.edu/id/eprint/2646
Logic from Programming Language Semantics
https://resolver.caltech.edu/CaltechETD:etd-02282008-111427
Authors: {'items': [{'id': 'Choo-Young-il', 'name': {'family': 'Choo', 'given': 'Young-il'}, 'show_email': 'NO'}]}
Year: 1987
DOI: 10.7907/r9hf-1b88
<p>Logic for reasoning about programs must proceed from the programming language semantics. It is our thesis that programs be considered as mathematical objects that can be reasoned about directly, rather than as linguistic expressions whose meanings are embedded in an intermediate formalism.</p>
<p>Since the semantics of many programming language features (including recursion, type-free application, infinite structures, self-reference, and reflection) require models that are constructed as limits of partial objects, a logic for dealing with partial objects is required.</p>
<p>Using the <i>D<sub>∞</sub></i> model of the λ-calculus, a logic (called <i>continuous logic</i>) for reasoning about partial objects is presented. In continuous logic, the logical operations (negation, implication, and quantification) are defined for each of the finite levels and then extended to the limit, giving us a model of type-free logic.</p>
<p>The triples of Hoare Logic are interpreted as partial assertions over the domain of partial states, and contradictions arising from rules for function definitions are analyzed. Recursive procedures and recursive functions are both proved using mathematical induction.</p>
<p>A domain of infinite lists is constructed as a model for languages with lazy evaluation, and it is compared to an ordinal-hierarchic construction. A model of objects and multiple inheritance is constructed where objects are self-referential states and multiple inheritance is defined using the notion of product of classes. The reflective processor for a language with environment and continuation reflection is constructed as the projective limit of partial reflective processors of finite height.</p>
https://thesis.library.caltech.edu/id/eprint/811
Constraint Methods for Neural Networks and Computer Graphics
https://resolver.caltech.edu/CaltechETD:etd-02122007-152609
Authors: {'items': [{'id': 'Platt-John-Carlton', 'name': {'family': 'Platt', 'given': 'John Carlton'}, 'show_email': 'NO'}]}
Year: 1989
DOI: 10.7907/vnt8-kh55
<p>Computer graphics and neural networks are related in that they both model natural phenomena. Physically-based models are used by computer graphics researchers to create realistic, natural animation, and neural models are used by neural network researchers to create new algorithms or new circuits. To successfully exploit these graphical and neural models, engineers want models that fulfill designer-specified goals. These goals are converted into mathematical constraints.</p>
<p>This thesis presents constraint methods for computer graphics and neural networks. The mathematical constraint methods modify the differential equations that govern the neural or physically based models. The constraint methods gradually enforce the constraints exactly. This thesis also describes applications of constrained models to real problems.</p>
<p>The first half of this thesis discusses constrained neural networks. The desired models and goals are often converted into constrained optimization problems. These optimization problems are solved using first-order differential equations. A series of constraint methods is applicable to optimization using differential equations: the <i>Penalty Method</i> adds extra terms to the optimization function which penalize violations of constraints; the <i>Differential Multiplier Method</i> adds subsidiary differential equations which estimate Lagrange multipliers to fulfill the constraints gradually and exactly; <i>Rate-Controlled Constraints</i> compute extra terms for the differential equation that force the system to fulfill the constraints exponentially. The applications of constrained neural networks include the creation of constrained circuits, error-correcting codes, symmetric edge detection for computer vision, and heuristics for the traveling salesman problem.</p>
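Two of these constraint methods can be contrasted on a hypothetical toy problem (minimize x² + y² subject to x + y = 1, not an application from the thesis). The sketch below uses forward-Euler integration of the first-order dynamics: the penalty method leaves a residual constraint violation of order 1/c, while the differential multiplier dynamics drive the violation to zero:

```python
import numpy as np

def penalty_descent(c, steps=20000, dt=1e-3):
    """Minimize f(x,y) = x^2 + y^2 subject to g = x + y - 1 = 0 by
    gradient descent on f + (c/2) g^2: the constraint is met only
    approximately, with a residual of order 1/c."""
    x = np.zeros(2)
    for _ in range(steps):
        g = x[0] + x[1] - 1.0
        x -= dt * (2.0 * x + c * g * np.ones(2))
    return x

def differential_multiplier(steps=20000, dt=1e-3):
    """Same problem, but a subsidiary equation integrates an estimate
    of the Lagrange multiplier, so the violation decays to zero."""
    x, lam = np.zeros(2), 0.0
    for _ in range(steps):
        g = x[0] + x[1] - 1.0
        x -= dt * (2.0 * x + lam * np.ones(2))   # x' = -grad f - lam grad g
        lam += dt * g                            # lam' = g
    return x

x_pen = penalty_descent(c=100.0)     # ends near (0.495, 0.495): g != 0
x_mul = differential_multiplier()    # ends at (0.5, 0.5): g -> 0
```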
<p>The second half of this thesis discusses constrained computer graphics models. In computer graphics, the desired models and goals become constrained mechanical systems, which are typically simulated with second-order differential equations. The <i>Penalty Method</i> adds springs to the mechanical system to penalize violations of the constraints. <i>Rate-Controlled Constraints</i> add forces and impulses to the mechanical system to fulfill the constraints with critically damped motion. Constrained computer graphics models can be used to make deformable physically-based models follow the directives of an animator.</p>
https://thesis.library.caltech.edu/id/eprint/617
Applications of Surface Networks to Sampling Problems in Computer Graphics
https://resolver.caltech.edu/CaltechETD:etd-03132007-083552
Authors: {'items': [{'email': 'BrianVon@fpga.com', 'id': 'Von-Herzen-Brian', 'name': {'family': 'Von Herzen', 'given': 'Brian'}, 'show_email': 'YES'}]}
Year: 1989
DOI: 10.7907/K9AS-GN82
<p>This thesis develops the theory, algorithms and data structures for adaptive sampling of parametric functions, which can represent the shapes and motions of physical objects. For the first time, ensured methods are derived for determining collisions and other interactions for a broad class of parametric functions. A new data structure, called a <i>surface network</i>, is developed for the collision algorithm and for other sampling problems in computer graphics. A surface network organizes a set of parametric samples into a hierarchy. Surface networks are shown to be good for rendering images, for approximating surfaces, and for modeling physical environments. The basic notion of a surface network is generalized to higher-dimensional problems such as collision detection. We may think of a two-dimensional network covering a three-dimensional solid, or an <i>n</i>-dimensional network embedded in a higher-dimensional space. Surface networks are applied to the problems of adaptive sampling of static parametric surfaces, to adaptive sampling of time-dependent parametric surfaces, and to a variety of applications in computer graphics, robotics, and aviation.</p>
<p>First we develop the theory for adaptive sampling of static surfaces. We explore bounding volumes that enclose static surfaces, subdivision mechanisms that adjust the sampling density, and subdivision criteria that determine where samples should be placed.</p>
<p>A new method is developed for creating bounding ellipsoids of parametric surfaces using a Lipschitz condition to place bounds on the derivatives of parametric functions. The bounding volumes are arranged in a hierarchy based on the hierarchy of the surface network. The method ensures that the bounding volume hierarchy contains the parametric surface completely. The bounding volumes are useful for computing surface intersections. They are potentially useful for ray tracing of parametric surfaces.</p>
<p>We develop and examine a variety of subdivision mechanisms to control the sampling process for parametric functions. Some of the methods are shown to improve the robustness of adaptive sampling. Algorithms for one mechanism, using bintrees of right parametric triangles, are particularly simple and robust.</p>
<p>A set of empirical subdivision criteria determine where to sample a surface, when we have no additional information about the surface. Parametric samples are concentrated in regions of high curvature, and along intersection boundaries.</p>
<p>Once the foundations of adaptive sampling for static surfaces are described, we examine time-dependent surfaces. Based on results with the empirical subdivision criteria for static surfaces, we derive ensured criteria for collision determination. We develop a new set of rectangular bounding volumes, apply a standard <i>k</i>-dimensional subdivision mechanism called k-d trees, and develop criteria for ensuring that we detect collisions between parametric surfaces.</p>
<p>We produce rectangular bounding boxes using a "Jacobian"-style matrix of Lipschitz conditions on the parametric function. The rectangular method produces even tighter bounds on the surface than the ellipsoidal method, and is effective for computing collisions between parametric surfaces.</p>
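One reading of this "Jacobian"-style construction can be sketched as follows. This is an illustrative assumption, not the thesis's code: given per-component bounds L[i][j] on |∂S_i/∂(param_j)| over a parameter rectangle, the axis-aligned box around the midpoint sample with half-widths Σ_j L[i][j] · (half-extent_j) must contain the whole patch, by the mean value inequality:

```python
import math

def lipschitz_box(S, L, u0, u1, v0, v1):
    """Axis-aligned box guaranteed to contain S([u0,u1] x [v0,v1]).

    L[i][j] bounds |dS_i / d(param_j)| over the patch (a "Jacobian"
    of Lipschitz constants). No point of the patch can move further
    than L[i][0]*du/2 + L[i][1]*dv/2 from the midpoint sample
    along axis i.
    """
    um, vm = (u0 + u1) / 2.0, (v0 + v1) / 2.0
    c = S(um, vm)
    hu, hv = (u1 - u0) / 2.0, (v1 - v0) / 2.0
    half = [L[i][0] * hu + L[i][1] * hv for i in range(3)]
    return [(c[i] - half[i], c[i] + half[i]) for i in range(3)]

# Hypothetical patch: part of a unit sphere.
def sphere(u, v):
    return (math.cos(u) * math.cos(v),
            math.sin(u) * math.cos(v),
            math.sin(v))

# Bounds on |d/du| and |d/dv| for each coordinate over the domain.
L = [[1.0, 1.0], [1.0, 1.0], [0.0, 1.0]]
box = lipschitz_box(sphere, L, 0.0, 0.5, 0.0, 0.5)
```

Subdividing the parameter rectangle halves the extents, so the boxes tighten toward the surface as a hierarchy is refined.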
<p>A new collision determination technique is developed that can detect collisions of parametric functions, based on surface network hierarchies. The technique guarantees that the first collision is found, to within the temporal accuracy of the computation, for surfaces with bounded parametric derivatives. Alternatively, it is possible to guarantee that no collisions occur for the same class of surfaces. When a collision is found, the technique reports the location and parameters of the collision as well as the time of first collision.</p>
<p>Finally, we examine several applications of the sampling methods. Surface networks are applied to the problem of converting a two-dimensional image, or texture map, into a set of triangles that tile the plane. Many polygon-rendering systems do not provide the capability of rendering surfaces with textures. The technique converts textures to triangles that can be rendered directly by a polygon system. In addition, potential applications of the collision determination techniques are discussed, including robotics and air-traffic control problems.</p>
https://thesis.library.caltech.edu/id/eprint/943
Generative Modeling: An Approach to High Level Shape Design for Computer Graphics and CAD
https://resolver.caltech.edu/CaltechETD:etd-07122007-144802
Authors: {'items': [{'email': 'johnsny@microsoft.com', 'id': 'Snyder-John-Michael', 'name': {'family': 'Snyder', 'given': 'John Michael'}, 'show_email': 'NO'}]}
Year: 1991
DOI: 10.7907/HRFJ-QC74
<p>Generative modeling is an approach to computer-assisted geometric modeling. The goal of the approach is to allow convenient and high-level specification of shapes, and provide tools for rendering and analysis of the specified shapes. Shapes include curves, surfaces, and solids in 3D space, as well as higher-dimensional entities such as surfaces deforming in time, and solids with a spatially varying mass density.</p>
<p>Shape specification in the approach involves combining low-dimensional entities, especially 2D curves, into higher-dimensional shapes. This combination is specified through a powerful shape description language which builds multidimensional parametric functions. The language is based on a set of primitive operators on parametric functions which include arithmetic operators, vector and matrix operators, integration and differentiation, constraint solution and global optimization. Although each primitive operator is fairly simple, high-level shapes and shape building operators can be defined using recursive combination of the primitive operators.</p>
<p>The approach encourages the modeler to build parameterized families of shapes rather than single instances. Shapes can be parameterized by scalar parameters (e.g., time or joint angle) or higher-dimensional parameters (e.g., a curve controlling how the scale of a cross section varies as it is translated). Such parameterized shapes allow easy modification of the design, since the modeler can interact with parameters that relate to high-level properties of the shape. In contrast, many geometric modeling systems use a much lower-level specification, such as through sets of many 3D control points.</p>
<p>Tools for rendering and analysis of generative models are developed using the concept of interval analysis. Each primitive operator on parametric functions has an inclusion function method, which produces an interval bound on the range of the function, given an interval bound on its domain. With these inclusion functions, robust algorithms exist for computing solutions to nonlinear systems of constraints and global minimization problems, when these problems are expressed in the modeling language. These algorithms, in turn, are developed into robust approximation techniques to compute intersections, CSG operations, and offset operations.</p>
https://thesis.library.caltech.edu/id/eprint/2865
Compiler Optimization of Data Storage
https://resolver.caltech.edu/CaltechETD:etd-06272007-081805
Authors: {'items': [{'id': 'Gupta-Rajiv', 'name': {'family': 'Gupta', 'given': 'Rajiv'}, 'show_email': 'NO'}]}
Year: 1991
DOI: 10.7907/E8DD-VG68
<p>The system efficiency and throughput of most architectures are critically dependent on the ability of the memory subsystem to satisfy data operand accesses. This ability is in turn dependent on the distribution or layout of the data relative to the access of the data by the executing code. Page faults, cache misses, truncated vectors, and global communication, for example, are expensive but common symptoms of data and access misalignment.</p>
<p>Compiler optimization, traditionally synonymous with code optimization, has addressed the issue of efficient data access by manipulating the code to better access the data under a fixed, default distribution. This approach is restrictive, and often suboptimal. Data optimization, or data-layout optimization, is presented as an integral part of compiler optimization.</p>
<p>For scalar data, a good compile-time approximation of the "reference string," or sequence of data accesses, is advanced for the purpose of distributing the data. However, the optimal distribution of the scalar data for such, or any, reference string is proved NP-complete. A methodology and a polynomial algorithm for an approximate solution are developed. Experiments with representative, but scaled, scientific programs and execution environments display a reduction in cache misses of up to two orders of magnitude.</p>
<p>For array data, compile-time predictions of the patterns in which the data is accessed by programs in scalar and array languages are examined. For arbitrary computations in an array language, the determination of the optimal layout of the data is proved to be NP-complete. Polynomial techniques for the approximate solutions to the optimal layout of arrays in both languages, scalar and array, are outlined.</p>
<p>The general applicability of the techniques, in terms of environments other than hierarchical memories, and in terms of interdependence with code manipulations, is discussed. New code optimizations inspired by the data distribution techniques are motivated. The prudence of compiler- over user-optimized data distribution is argued.</p>
https://thesis.library.caltech.edu/id/eprint/2740
From geometry to texture: experiments towards realism in computer graphics
https://resolver.caltech.edu/CaltechETD:etd-08062007-110815
Authors: {'items': [{'email': 'timkay@not.com', 'id': 'Kay-T-L', 'name': {'family': 'Kay', 'given': 'Timothy L.'}, 'show_email': 'YES'}]}
Year: 1992
DOI: 10.7907/XCAM-R775
This thesis presents a new computer graphics texture element called a texel as well as an associated rendering algorithm, which together produce an appearance never before achieved in computer graphics. Unlike previous modeling primitives, which are limited to solid, crisp appearances (e.g., metal, plastics, and glass), texels have a soft, fuzzy appearance, and thus can be used to create models and images of soft objects.
This thesis presents a solution to the problem of creating fur. As an example, a Teddy bear is modeled and rendered. As part of the process, a new BRDF is developed for texels which can produce backlighting effects. A model deformation technique using trilinear solids is developed.
This thesis then addresses a more complex example, that of creating a microscopic swatch of cloth by computationally "weaving" threads. The process of converting the resulting geometric model into texels is presented. The swatch of cloth is then replicated to cover the infinite plane seamlessly.
A new phenomenon, the texture threshold effect, is presented. It is the point at which geometry turns into texture. When viewed from beyond a certain distance threshold, the appearance of a microscopic model will converge to a macroscopic model. The position of the texture threshold is calculated. The infinite cloth model is then rendered from beyond the texture threshold, and its cloth BRDF is extracted computationally. This BRDF is then used to render a cloth-covered car seat.
The BRDF extraction process involves sampling an image which contains spectral energy above the Nyquist limit. Hence, the use of point sampling in computer graphics is analyzed to verify that aliasing energy is controlled. The process of jittered subsampling is analyzed, correcting and completing previous attempts. The results confirm that it is possible to render complex computer graphics imagery while avoiding artifacts from aliased energy.
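The benefit of jittering can be demonstrated on a hypothetical 1-D analogue (this is not the thesis's analysis): sampling a cosine whose frequency equals the sample rate. Regular samples alias the signal into a constant, systematic error, while one uniformly jittered sample per cell trades that structured alias energy for unstructured noise about the true mean:

```python
import math
import random

def stratified_mean(f, n, jitter, rng):
    """Estimate the mean of f over [0, 1) from one sample per cell.

    jitter=False samples cell centers (a regular grid);
    jitter=True draws the position uniformly within each cell.
    """
    return sum(f((i + (rng.random() if jitter else 0.5)) / n)
               for i in range(n)) / n

# A signal exactly at the sample frequency; its true mean is 0.
sig = lambda x: math.cos(2.0 * math.pi * 64.0 * x)
rng = random.Random(1)
regular = stratified_mean(sig, 64, False, rng)   # aliases to a constant -1
jittered = sum(stratified_mean(sig, 64, True, rng)
               for _ in range(100)) / 100        # unbiased, just noisy
```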
https://thesis.library.caltech.edu/id/eprint/3028
A structured approach to physically-based modeling for computer graphics
https://resolver.caltech.edu/CaltechTHESIS:09282011-075406850
Authors: {'items': [{'id': 'Barzel-R', 'name': {'family': 'Barzel', 'given': 'Ronen'}, 'show_email': 'NO'}]}
Year: 1992
DOI: 10.7907/tbgd-g285
<p>This thesis presents a framework for the design of physically-based computer graphics models. The framework includes a paradigm for the structure of physically-based models, techniques for "structured" mathematical modeling, and a specification of a computer program structure in which to implement the models. The framework is based on known principles and methodologies of structured programming and mathematical modeling. Because the framework emphasizes the structure and organization of models, we refer to it as "Structured Modeling."</p>
<p>The Structured Modeling framework focuses on clarity and "correctness" of models, emphasizing explicit statement of assumptions, goals, and techniques. In particular, we partition physically-based models, separating them into conceptual and mathematical models, and posed problems. We control complexity of models by designing in a modular manner, piecing models together from smaller components.</p>
<p>The framework places a particular emphasis on defining a complete formal statement of a model's mathematical equations, before attempting to simulate the model. To manage the complexity of these equations, we define a collection of mathematical constructs, notation, and terminology, that allow mathematical models to be created in a structured and modular manner.</p>
<p>We construct a computer programming environment that directly supports the implementation of models designed using the above techniques. The environment is geared to a tool-oriented approach, in which models are built from an extensible collection of software objects, that correspond to elements and tasks of a "blackboard" design of models.</p>
<p>A substantial portion of this thesis is devoted to developing a library of physically-based model "modules," including rigid-body kinematics, rigid-body dynamics, and dynamic constraints, all built with the Structured Modeling framework. These modules are intended to serve both as examples of the framework, and as potentially useful tools for the computer graphics community. Each module includes statements of goals and assumptions, explicit mathematical models and problem statements, and descriptions of software objects that support them. We illustrate the use of the library to build some sample models, and include discussion of various possible additions and extensions to the library.</p>
<p>Structured Modeling is an experiment in modeling: an exploration of designing via strict adherence to a dogma of structure, modularity, and mathematical formality. It does not stress issues such as particular numerical simulation techniques or efficiency of computer execution time or memory usage, all of which are important practical considerations in modeling. However, at least so far as the work carried on in this thesis, Structured Modeling has proven to be a useful aid in the design and understanding of complex physically based models.</p>
https://thesis.library.caltech.edu/id/eprint/6691
Sequence specific effects on the incorporation of dideoxynucleotides by a modified T7 polymerase
https://resolver.caltech.edu/CaltechTHESIS:11122012-105407501
Authors: {'items': [{'id': 'Blanchard-A-P', 'name': {'family': 'Blanchard', 'given': 'Alan-Philippe'}, 'show_email': 'NO'}]}
Year: 1993
DOI: 10.7907/9vz4-0e68
<p>While incorporating nucleotides onto the end of a DNA molecule, DNA polymerases selectively discriminate against dideoxynucleotides in favor of incorporating deoxynucleotides. The magnitude of this discrimination is modulated by the template DNA sequence near the incorporation site. This effect has been characterized by analyzing the raw data from a large number of DNA sequencing experiments. It is shown that, for bacteriophage T7 polymerase, the 5 contiguous bases extending from 3 bases 3' (on the template strand) from the incorporation site to 1 base 5' of the incorporation site are the most important in modulating dideoxynucleotide discrimination. A table of discrimination ratios for 1007 different 5-mer contexts is presented.</p>
https://thesis.library.caltech.edu/id/eprint/7265
A multiple-mechanism developmental model for defining self-organizing geometric structures
https://resolver.caltech.edu/CaltechETD:etd-10022007-150221
Authors: {'items': [{'id': 'Fleischer-K-W', 'name': {'family': 'Fleischer', 'given': 'Kurt W.'}, 'show_email': 'NO'}]}
Year: 1995
DOI: 10.7907/sz7n-ad32
This thesis introduces a model of multicellular development. The model combines elements of the chemical, cell lineage, and mechanical models of morphogenesis pioneered by Turing, Lindenmayer, and Odell, respectively. The internal state of each cell in the model is represented by a time-varying state vector that is updated by a differential equation. The differential equation is formulated as a sum of contributions from different sources, describing gene transcription, kinetics, and cell metabolism. Each term in the differential equation is multiplied by a conditional expression that models regulatory processes specific to the process described by that term.
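A toy rendition of this gated-sum structure (purely illustrative; the model's actual state vectors, terms, and regulatory conditions are far richer) is a single chemical whose synthesis term is multiplied by a smooth conditional expression, yielding a bistable switch of the kind that underlies differentiation:

```python
import math

def gate(x, theta, k=20.0):
    """Smooth conditional expression: ~0 below threshold theta, ~1 above."""
    return 1.0 / (1.0 + math.exp(-k * (x - theta)))

def settle(a0, steps=5000, dt=0.01):
    """Euler-integrate da/dt = synthesis * gate(a) - decay * a.

    One gated contribution per process, summed, mimicking the form
    of the model's state-vector update.
    """
    a = a0
    for _ in range(steps):
        a += dt * (1.0 * gate(a, 0.5) - 0.5 * a)
    return a

high = settle(0.6)   # above threshold: autocatalysis switches fully on
low = settle(0.4)    # below threshold: decay wins, the gene stays off
```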
<p>The resulting model has a broader range of fundamental mechanisms than other developmental models. Since gene transcription is included, the model can represent the genetic orchestration of a developmental process involving multiple mechanisms.</p>
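The per-cell update described above, a sum of mechanism terms each multiplied by a conditional regulatory expression, can be sketched as follows. This is a minimal illustration only: the function names, the explicit-Euler step, and the example gate and decay term are assumptions for exposition, not the thesis's implementation.

```python
def step(state, terms, dt=0.01):
    """One explicit-Euler update of a cell's state vector.

    `terms` is a list of (gate, contribution) pairs: gate(state) is the
    conditional expression modeling a regulatory process, and
    contribution(state) is that mechanism's term in the differential
    equation (e.g. gene transcription, kinetics, cell metabolism).
    All names here are illustrative.
    """
    dstate = [0.0] * len(state)
    for gate, contribution in terms:
        g = gate(state)
        dstate = [d + g * c for d, c in zip(dstate, contribution(state))]
    return [x + dt * d for x, d in zip(state, dstate)]

# Hypothetical example: a gene product (index 0) decays, but only while
# a regulatory chemical (index 1) is above a threshold.
terms = [
    (lambda s: 1.0 if s[1] > 0.5 else 0.0,  # regulatory condition (gate)
     lambda s: [-s[0], 0.0]),               # first-order decay term
]
new_state = step([1.0, 0.8], terms)  # gate active: species 0 decays slightly
```

The gating structure is the point: turning a gate function on or off switches a whole mechanism in or out of the sum without changing the other terms.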
<p>We show that a computational implementation of the model represents a wide range of biologically relevant phenomena in two and three dimensions. This is illustrated by a diverse collection of simulation experiments exhibiting phenomena such as lateral inhibition, differentiation, segment formation, size regulation, and regeneration of damaged structures.</p>
<p>We have explored several application areas with the model:</p>
<p>Synthetic biology. We advocate the use of mathematical modeling and simulation for generating intuitions about complex biological systems, in addition to the usual application of mathematical biology to perform analysis on a simplified model. The breadth of our model makes it useful as a tool for exploring biological questions about pattern formation and morphogenesis. We show that simulated experiments to address a particular question can be done quickly and can generate useful biological intuitions. As an example, we document a simulation experiment exploring inhibition via surface chemicals. This experiment suggests that the final pattern depends strongly on the temporal sequence of events. This intuition was obtained quickly using the simulator as an aid to understanding the general behavior of the developmental system.</p>
<p>Artificial evolution of neural networks. Neural networks can be represented using a developmental model. We investigate the use of artificial evolution to select equations and parameters that cause the model to create desired structures. We compare our approach to other work in evolutionary neural networks, and discuss the difficulties involved.</p>
<p>Computer graphics modeling. We extend the model to allow cells to sense the presence of a 3D surface model, and then use the multicellular simulator to grow cells on the surface. This database amplification technique enables the creation of cellular textures to represent detailed geometry on a surface (e.g., scales, feathers, thorns).</p>
<p>In the process of writing many developmental programs, we have gained some experience in the construction of self-organizing cellular structures. We identify some critical issues (size regulation and scalability), and suggest biologically plausible strategies for addressing them.</p>
https://thesis.library.caltech.edu/id/eprint/3871