Advisor Feed
https://feeds.library.caltech.edu/people/Kajiya-J-T/advisor.rss
A Caltech Library Repository Feed
http://www.rssboard.org/rss-specification
Generator: python-feedgen
Language: en
Thu, 30 Nov 2023 19:31:26 +0000

A Versatile Ethernet Interface
https://resolver.caltech.edu/CaltechTHESIS:04122012-112531771
Authors: Whelan, Daniel Steven
Year: 1981
DOI: 10.7907/x4t9-5n88
No abstract.
https://thesis.library.caltech.edu/id/eprint/6918

REST: A Leaf Cell Design System
https://resolver.caltech.edu/CaltechTHESIS:04122012-162654185
Authors: Mosteller, R. C.
Year: 1981
DOI: 10.7907/1r9d-ad60
This thesis describes a leaf cell design system, REST, Richard's Editor for Sticks. REST is intended to be used for the preparation of the lowest-level cells in an integrated circuit design. A stick notation is used in the editing process. Given a structured design methodology, any design task can be separated into two parts: 1) leaf cell design and 2) composition cell design. This tool addresses the first of these tasks, although it may also be used for general manipulation of stick diagrams. A table-driven compaction algorithm is presented. This graph-based algorithm uses a weighted affinity factor to reduce total polysilicon and diffusion wire length. A suite of utilities provides functions such as file interface, physical mapping, and annotation, consistent with a set of design rules. The system has been implemented in Simula on a DEC 20 computer and works in conjunction with a limited functional diagramming system. The design rules, models, and stick interpretation are table-driven and can be changed for various technologies. Currently REST is being used for NMOS technology. A community of users has used the REST system to prepare a number of designs, resulting in a substantial reduction of design time. In addition, the system is currently being used at a major computer manufacturer in conjunction with a VLSI design course.
https://thesis.library.caltech.edu/id/eprint/6927

Hierarchical Nets: A Structured Petri Net Approach to Concurrency
https://resolver.caltech.edu/CaltechTHESIS:04022012-150759898
Authors: Choo, Young-il
Year: 1982
DOI: 10.7907/t5w4-vt07
<p>Liveness and safeness are two key properties Petri nets should have when they are used to model asynchronous systems. The analysis of liveness and safeness for general Petri nets, though shown to be decidable by Mayr [1981], is still computationally expensive (Lipton [1976]). In this paper a hierarchical approach is taken: a class of Petri nets is recursively defined starting with simple, live, and safe structures, becoming progressively more complex using net transformations designed to preserve liveness and safeness.</p>
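On small nets, the liveness and safeness properties discussed above can be checked by brute force. A minimal sketch of such a check (an illustrative reachability enumeration, not the thesis's hierarchical construction):

```python
class PetriNet:
    """Minimal place/transition net: each transition consumes one token
    from each input place and produces one on each output place."""

    def __init__(self, transitions):
        # transitions: name -> (input_places, output_places)
        self.transitions = transitions

    def enabled(self, marking):
        return [t for t, (ins, _) in self.transitions.items()
                if all(marking.get(p, 0) >= 1 for p in ins)]

    def fire(self, marking, t):
        ins, outs = self.transitions[t]
        m = dict(marking)
        for p in ins:
            m[p] -= 1
        for p in outs:
            m[p] = m.get(p, 0) + 1
        return m

def reachable(net, m0, limit=10_000):
    """Breadth-first enumeration of the reachable markings."""
    key = lambda m: tuple(sorted((p, n) for p, n in m.items() if n))
    seen, frontier, found = {key(m0)}, [m0], [m0]
    while frontier and len(found) < limit:
        nxt = []
        for m in frontier:
            for t in net.enabled(m):
                m2 = net.fire(m, t)
                if key(m2) not in seen:
                    seen.add(key(m2))
                    found.append(m2)
                    nxt.append(m2)
        frontier = nxt
    return found

# A one-token cycle: t1 moves the token p1 -> p2, t2 moves it back.
net = PetriNet({"t1": (("p1",), ("p2",)), "t2": (("p2",), ("p1",))})
marks = reachable(net, {"p1": 1})
safe = all(n <= 1 for m in marks for n in m.values())   # no place exceeds one token
deadlock_free = all(net.enabled(m) for m in marks)      # something can always fire
```

Deadlock-freedom as checked here is only a necessary condition for liveness; the point of the thesis's transformations is to guarantee the full properties by construction rather than by enumeration, which is what makes the approach tractable for large nets.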
<p>Using simple net transformations, nice nets, which are live and safe, are defined. Their behavior is too restrictive for modeling non-trivial systems, so the mutual exclusion and the repetition constructs are added to get µ-ρ-nets. Since the use of mutual exclusions can cause deadlock, and the use of repetitions can cause loss of safeness, restrictions for their use are given. Using µ-ρ-nets as the building blocks, hierarchical nets are defined. When the mutual exclusion and repetition constructs are allowed between hierarchical nets, distributed hierarchical nets are obtained. Examples of distributed hierarchical nets used to solve synchronization problems are given.</p>
<p>General net transformations not preserving liveness or safeness, and a notion of duality, are presented, and their effect on Petri net behavior is considered.</p>
https://thesis.library.caltech.edu/id/eprint/6888

Automated Performance Optimization of Custom Integrated Circuits
https://resolver.caltech.edu/CaltechETD:etd-11072005-081513
Authors: Trimberger, Stephen Mathias
Year: 1983
DOI: 10.7907/8YDZ-G637
<p>The complexity of integrated circuits requires a hierarchical design methodology that allows the user to divide the problem into pieces, design each piece independently, and assemble the pieces into the complete system. The design hierarchy brings out composition problems, problems that are a property of the assembly as a whole, not of one single instance in the hierarchy.</p>
<p>Recent research has produced tools that automate part of the composition task - the logical connection of the pieces. However, these tools do not ensure that signals sent over these connections are driven strongly enough to give reasonable cycle speed in the resulting chips. It is easy to specify an assembly in which a small gate is required to drive an enormous load. Parasitic capacitance of the wiring made automatically by the logical connection tool can be the dominant source of delay, so assembly tools can actually worsen the performance of the circuit and hide this fact from the designer.</p>
<p>When required to make large circuits, automated layout tools such as PLA generators can blindly make layouts that give abysmally poor performance. Here again, the delay is in a part of the circuit that the designer did not specify, so it is hidden. Finding and correcting these problems is a difficult and time-consuming task in integrated circuit design, and one that consumes far more designer time and computer time than the simple assembly of the chip.</p>
<p>The task of guaranteeing that circuits meet performance specifications has been left mainly to the designer. Computer aided design has provided analysis tools, tools that tell the designer the performance statistics of the current design. It is then the designer's burden to interpret the performance statistics and use them as guides to make changes in the circuit.</p>
<p>This thesis views performance optimization as an electrical composition task. Poor performance as a result of mismatched loads on devices is a problem of composition and should be corrected by the composition tool. Such a tool is presented in this thesis -- a program that automatically sizes transistors in a symbolic description of a chip to match the load the transistors are driving. The results are encouraging: they show that delays can be cut by a factor of two in many current designs.</p>
https://thesis.library.caltech.edu/id/eprint/4438

Parallel Machines for Computer Graphics
https://resolver.caltech.edu/CaltechETD:etd-11092005-140159
Authors: Ullner, Michael K.
Year: 1983
DOI: 10.7907/wxmq-sx43
<p>Computer graphics provides some ideal applications for the kind of highly parallel implementations made possible by advances in integrated circuit technology. Specifically, hidden line and hidden surface algorithms, while easily defined and simple in concept, entail a substantial amount of computation. This requirement fits the characteristics of integrated circuit technology, where modular designs involving regular communication between many concurrent operations are rewarded with high performance at an acceptable cost.</p>
<p>Ray tracing is a very flexible technique that can be used to produce some of the most realistic of all computer generated images by simulating the interactions of light rays with surfaces in a modeled scene. Because light rays are mutually independent, many may be processed simultaneously, and the potential for concurrency is great. One architecture for expediting a ray tracing algorithm consists of a conventional computer equipped with a special purpose peripheral device for locating the intersections of rays and surfaces. This intersection computation is the most time consuming aspect of a ray tracing algorithm. Although the attached processor configuration can produce images more quickly than an unaided computer, its performance is limited. Alternatively, a pipeline of surface processors can replace the peripheral device. Each processor computes the intersections of its stored surface with rays that flow through the pipe. Such a machine can be quite fast, and its performance can be increased by lengthening the pipeline, but the component processors are not very effectively utilized. A third approach combines the advantages of the prior two machines by using an array of processors, each simulating a distinct subvolume of the modeled world by treating light rays traveling through space as messages flowing between processors. Local communication is sufficient because light rays travel continuously through space.</p>
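The intersection test that each such processor repeats can be sketched for one illustrative surface type; the sphere and the interface below are assumptions for the sake of example, not taken from the thesis:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Nearest positive intersection parameter t of a ray with a sphere,
    or None on a miss.  Solves the quadratic |o + t*d - c|^2 = r^2."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4.0*a*c
    if disc < 0.0:
        return None                      # ray misses the sphere
    sq = math.sqrt(disc)
    for t in ((-b - sq) / (2.0*a), (-b + sq) / (2.0*a)):
        if t > 1e-9:                     # first hit in front of the origin
            return t
    return None
```

In the pipeline machine, each stage would run such a test for its stored surface against the stream of rays; in the subvolume array, a failed test forwards the ray as a message to the neighboring processor along its path.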
<p>In real time computer graphics, successive images must be produced in times that are imperceptible to a viewer. Although the ray tracing machines fall short of this performance, it is possible to compromise image quality in order to produce a highly parallel machine capable of real time operation. The processors in such a machine are organized to form a binary tree. Leaf processors scan-convert surfaces, producing a sequence of segments, where a segment is the portion of a surface that appears on a single scan line of the display. Processors towards the root of the tree accept two such segment sequences and produce a third in which all segment overlap has been resolved. The final image is available at the root of the tree. The communication bottleneck that would otherwise occur at the root can be eliminated by breaking out parallel roots, and the resulting tree may be extended to scenes of almost arbitrary complexity merely by increasing the supply of available processors.</p>
<p>Massive parallelism can also be applied to the problem of removing hidden edges from line drawings. A suitable architecture takes the form of a pipeline in which each processor is dedicated to the handling of a single polygon edge. These processors successively clip line segments passing through the pipeline to eliminate portions hidden behind surfaces. Each edge processor can be constructed out of little more than three serial multipliers.</p>
<p>The machines described here are varied in organization, and each functions differently, but their treatment of sorting is one ingredient common to all. Sorting is a key component of hidden surface algorithms running on conventional computers, but its extensive communication requirements make it costly for use in a highly integrated design. Consequently, the highly parallel machines described here operate largely without sorting. Instead, they maintain information in sorted order or make use of already sorted information to limit communication requirements.</p>
https://thesis.library.caltech.edu/id/eprint/4471

ANIMAC: A Multiprocessor Architecture for Real-Time Computer Animation
https://resolver.caltech.edu/CaltechETD:etd-03262008-092532
Authors: Whelan, Daniel Steven
Year: 1985
DOI: 10.7907/0qnw-g372
<p>Advances in integrated circuit technology have been largely responsible for the growth of the computer graphics industry. This technology promises additional growth through the remainder of the century. This dissertation addresses how this future technology can be harnessed and used to construct very high performance real-time computer graphics systems.</p>
<p>This thesis proposes a new architecture for real-time animation engines. The ANIMAC architecture achieves high performance by utilizing a two-dimensional array of processors that determine visible surfaces in parallel. An array of sixteen processors with only nearest-neighbor interprocessor communications can produce real-time shadowed images of scenes containing 100,000 triangles.</p>
<p>The ANIMAC architecture is based upon analysis and simulations of various parallelization techniques. These simulations suggest that the viewing space be spatially subdivided and that each processor produce a visible surface image for several viewing space subvolumes. Simple assignments of viewing space subvolumes to processors are shown to offer high parallel efficiencies.</p>
<p>Simulations of parallel algorithms were driven with data derived from real scenes since analysis of scene composition suggested that using simplistic models of scene composition might lead to incorrect results.</p>
<p>The ANIMAC architecture required the development of a shadowing algorithm which was tailored to its parallel environment. This algorithm separates shadowing into local and foreign effects. Its implementation allows individual processors to compute shadowing effects for their image regions utilizing only very local information.</p>
<p>The design of the ANIMAC processors makes extensive use of new VLSI architectures. A formerly proposed processor per object architecture is used to determine visible surfaces while new processor per object and processor per pixel architectures are used to determine shadowing effects.</p>
<p>It is estimated that the ANIMAC architecture can be realized in the early 1990's. Realizing this architecture will require considerable amounts of hardware and capital, yet its cost will not be out of line when compared with today's real-time computer graphics systems.</p>
https://thesis.library.caltech.edu/id/eprint/1155

Combining Computation with Geometry
https://resolver.caltech.edu/CaltechETD:etd-04102008-142130
Authors: Lien, Sheue-Ling Chang
Year: 1985
DOI: 10.7907/n1qe-h846
<p>This thesis seeks to establish mathematical principles and to provide efficient solutions to various time consuming operations in computer-aided geometric design. It contains a discussion of three major topics: (1) design validation by means of object interference detection, (2) object reconstruction through the union, intersection, and subtraction of two polyhedra, and (3) calculation of basic engineering properties such as volume, center of mass, or moments of inertia.</p>
<p>Two criteria are presented for solving the problems of point-polygon enclosure and point-polyhedron enclosure in object interference detection. An algorithm for efficient point-polyhedron-enclosure detection is presented. Singularities encountered in point-polyhedron-enclosure detection are categorized and simple methods for resolving them are also included.</p>
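The parity idea behind such enclosure tests is easiest to see in the 2D point-polygon case: cast a ray from the query point and count edge crossings. A sketch of that classic test follows; the half-open edge rule used here is one common way of resolving the vertex singularities the abstract alludes to, not necessarily the thesis's method:

```python
def point_in_polygon(pt, poly):
    """Ray-crossing parity test: cast a ray from pt in the +x direction
    and count edge crossings; an odd count means the point is inside.
    Counting an edge only when exactly one endpoint lies strictly above
    the ray (half-open rule) avoids double-counting at shared vertices."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                          # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                               # crossing to the right
                inside = not inside
    return inside
```

The 3D point-polyhedron test follows the same parity logic with ray-face intersections, where the singular cases (ray through a vertex or edge, ray in the plane of a face) are the ones the abstract categorizes.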
<p>A new scheme for representing solid objects, called skeletal polyhedron representation, is proposed. Also included are algorithms for performing set operations on polyhedra (or polygons) represented in skeletal polyhedron representation, algorithms for performing edge-edge intersection and face-face intersection in a set operation, and a perturbation method which can be used to resolve singularities for an easy execution of edge-edge intersection and face-face intersection.</p>
<p>A symbolic method for calculating basic engineering properties (such as volume, center of mass, moments of inertia, and similar integral properties of geometrically complex solids) is proposed. The same method is generalized for computing the integral properties of a set-combined polyhedron, and for computing the integral properties of an arbitrary polyhedron in m-dimensional (R<sup>m</sup>) space.</p>
https://thesis.library.caltech.edu/id/eprint/1333

Monte Carlo Methods for 2-D Compaction
https://resolver.caltech.edu/CaltechETD:etd-03202008-091615
Authors: Mosteller, Richard Craig
Year: 1986
DOI: 10.7907/mwrq-t026
<p>A new method of compaction for VLSI circuits is presented. Compaction is done simultaneously in two dimensions and uses a Monte Carlo simulation method often referred to as simulated annealing for optimization. A new curvilinear representation for VLSI circuits, specifically chosen to make the compaction efficient, is developed. Experiments with numerous cells are presented that demonstrate this method to be as good as, or better than, the hand compaction previously applied to these cells. Hand compaction was the best previously known method of compaction. An experimental evaluation is presented of how the run time complexity grows as the number, <i>N</i>, of objects in the circuit increases. The results of this evaluation indicate that the run time growth is of order <i>O</i>(<i>N</i> log(<i>A</i>))<i>f</i>(<i>d</i>), where <i>f</i>(<i>d</i>) is a function of the density, <i>d</i>, and <i>A</i> is the initial cell area. The function <i>f</i>(<i>d</i>) appears to have negligible or no dependence on <i>N</i>. A hierarchical composition approach is developed which takes advantage of the capability of the curvilinear representation and the 2-dimensional compaction technique.</p>
https://thesis.library.caltech.edu/id/eprint/1035

Logic from Programming Language Semantics
https://resolver.caltech.edu/CaltechETD:etd-02282008-111427
Authors: Choo, Young-il
Year: 1987
DOI: 10.7907/r9hf-1b88
<p>Logic for reasoning about programs must proceed from the programming language semantics. It is our thesis that programs be considered as mathematical objects that can be reasoned about directly, rather than as linguistic expressions whose meanings are embedded in an intermediate formalism.</p>
<p>Since the semantics of many programming language features (including recursion, type-free application, infinite structures, self-reference, and reflection) require models that are constructed as limits of partial objects, a logic for dealing with partial objects is required.</p>
<p>Using the <i>D<sub>∞</sub></i> model of the λ-calculus, a logic (called <i>continuous logic</i>) for reasoning about partial objects is presented. In continuous logic, the logical operations (negation, implication, and quantification) are defined for each of the finite levels and then extended to the limit, giving us a model of type-free logic.</p>
<p>The triples of Hoare Logic are interpreted as partial assertions over the domain of partial states, and contradictions arising from rules for function definitions are analyzed. Recursive procedures and recursive functions are both proved using mathematical induction.</p>
<p>A domain of infinite lists is constructed as a model for languages with lazy evaluation, and it is compared to an ordinal-hierarchic construction. A model of objects and multiple inheritance is constructed where objects are self-referential states and multiple inheritance is defined using the notion of product of classes. The reflective processor for a language with environment and continuation reflection is constructed as the projective limit of partial reflective processors of finite height.</p>
https://thesis.library.caltech.edu/id/eprint/811

Images, Numerical Analysis of Singularities and Shock Filters
https://resolver.caltech.edu/CaltechETD:etd-06192006-090538
Authors: Rudin, Leonid Iakov
Year: 1987
DOI: 10.7907/5hr8-8412
<p>This work is concerned primarily with establishing a natural mathematical framework for the Numerical Analysis of Singularities, a term which we coined for this new evolving branch of <i>numerical</i> analysis.</p>
<p>The problem of analyzing singular behavior of nonsmooth functions is implicitly or explicitly ingrained in any successful attempt to extract information from images. The abundance of papers on so-called Edge Detection testifies to this statement.</p>
<p>We attempt to make a fresh start by reformulating this old problem in the rigorous context of the Theory of Generalized Functions of several variables, with stress put on the computational aspects of essential singularities. We state and prove a variant of the Divergence Theorem for discontinuous functions, which we call the Fundamental Theorem of Edge Detection, for it is the backbone of the numerical analysis advocated here, based on estimates of the contributions furnished by the essential singularities of functions.</p>
<p>We further extend this analysis to arbitrary order singularities by utilizing Miranda's calculus of tangential derivatives. With this machinery we are able to explore computationally the internal geometry of singularities, including singular, i.e., nonsmooth, singularity boundaries. This theory gives rise to a singularity detection scheme called "rotating thin masks" which is applicable to arbitrary order n-dimensional essential singularities. In a particular implementation we combined a first-order detector with various curvature detectors derived here. Preliminary experimental results are presented. We also derive a new class of nonlinear singularity detection schemes based on tensor products of distributions.</p>
<p>Finally, a novel computational approach to the problem of image enhancement is presented. We call this construction the Shock Filters, since it is founded on nonlinear PDEs whose solutions exhibit formation of discontinuous profiles, corresponding to shock waves in gas dynamics. An algorithm for an experimental Shock Filter, based on an upwind finite difference scheme, is presented and tested on one- and two-dimensional data.</p>
https://thesis.library.caltech.edu/id/eprint/2646

Compiler Optimization of Data Storage
https://resolver.caltech.edu/CaltechETD:etd-06272007-081805
Authors: Gupta, Rajiv
Year: 1991
DOI: 10.7907/E8DD-VG68
<p>The system efficiency and throughput of most architectures are critically dependent on the ability of the memory subsystem to satisfy data operand accesses. This ability is in turn dependent on the distribution or layout of the data relative to the access of the data by the executing code. Page faults, cache misses, truncated vectors, global communication, for example, are expensive but common symptoms of data and access misalignment.</p>
<p>Compiler optimization, traditionally synonymous with code optimization, has addressed the issue of efficient data access by manipulating the code to better access the data under a fixed, default distribution. This approach is restrictive, and often suboptimal. Data optimization, or data-layout optimization, is presented as an integral part of compiler optimization.</p>
<p>For scalar data, a good compile-time approximation of the "reference string," or sequence of data accesses, is advanced for the purpose of distributing the data. However, the optimal distribution of the scalar data for such, or any, reference string is proved NP-complete. A methodology and a polynomial algorithm for an approximate solution are developed. Experiments with representative, but scaled, scientific programs and execution environments display a reduction in cache misses of up to two orders of magnitude.</p>
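The effect being measured can be illustrated with a toy direct-mapped cache driven by a reference string under two candidate layouts; the cache model and the layouts below are illustrative assumptions, not the thesis's algorithm:

```python
def cache_misses(ref_string, layout, line_size=4, n_lines=2):
    """Misses of a tiny direct-mapped cache for a scalar reference
    string, given a layout mapping each variable to a byte address."""
    lines = [None] * n_lines              # tag currently held by each line
    misses = 0
    for var in ref_string:
        tag, idx = divmod(layout[var] // line_size, n_lines)
        if lines[idx] != tag:             # miss: fill the line
            lines[idx] = tag
            misses += 1
    return misses

refs = ["a", "b", "a", "b", "a", "b"]   # a and b alternate in the code
apart    = {"a": 0, "b": 8}   # same line index, different tags: mutual eviction
together = {"a": 0, "b": 1}   # packed into one cache line
```

Under `apart` every one of the six accesses misses, while `together` misses only once - the kind of gap a reference-string-driven data layout is meant to close.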
<p>For array data, compile-time predictions of the patterns in which the data is accessed by programs in scalar and array languages are examined. For arbitrary computations in an array language, the determination of the optimal layout of the data is proved to be NP-complete. Polynomial techniques for the approximate solutions to the optimal layout of arrays in both languages, scalar and array, are outlined.</p>
<p>The general applicability of the techniques, in terms of environments other than hierarchical memories, and in terms of interdependence with code manipulations, is discussed. New code optimizations inspired by the data distribution techniques are motivated. The prudence of compiler- over user-optimized data distribution is argued.</p>
https://thesis.library.caltech.edu/id/eprint/2740

Generative Modeling: An Approach to High Level Shape Design for Computer Graphics and CAD
https://resolver.caltech.edu/CaltechETD:etd-07122007-144802
Authors: Snyder, John Michael
Year: 1991
DOI: 10.7907/HRFJ-QC74
<p>Generative modeling is an approach to computer-assisted geometric modeling. The goal of the approach is to allow convenient and high-level specification of shapes, and provide tools for rendering and analysis of the specified shapes. Shapes include curves, surfaces, and solids in 3D space, as well as higher-dimensional entities such as surfaces deforming in time, and solids with a spatially varying mass density.</p>
<p>Shape specification in the approach involves combining low-dimensional entities, especially 2D curves, into higher-dimensional shapes. This combination is specified through a powerful shape description language which builds multidimensional parametric functions. The language is based on a set of primitive operators on parametric functions which include arithmetic operators, vector and matrix operators, integration and differentiation, constraint solution and global optimization. Although each primitive operator is fairly simple, high-level shapes and shape building operators can be defined using recursive combination of the primitive operators.</p>
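The flavor of combining low-dimensional parametric functions into higher-dimensional shapes can be sketched with closures; the operator names `circle` and `sweep` below are invented for illustration and are not the thesis's language:

```python
import math

def circle(r):
    """2D cross-section curve: u in [0, 1] -> (x, y)."""
    return lambda u: (r * math.cos(2 * math.pi * u),
                      r * math.sin(2 * math.pi * u))

def sweep(cross_section, scale, height):
    """Combine a 2D curve with a scalar profile into a surface:
    (u, v) -> 3D, translating the cross-section along z while
    scale(v) controls its size -- two low-dimensional functions
    composed into a higher-dimensional parametric one."""
    return lambda u, v: tuple(scale(v) * c for c in cross_section(u)) + (v * height,)

cylinder = sweep(circle(1.0), lambda v: 1.0, 5.0)       # constant cross-section
cone     = sweep(circle(1.0), lambda v: 1.0 - v, 5.0)   # cross-section shrinks to a point
```

Because `cone` is just a composition of parametric functions, editing the profile `lambda v: 1.0 - v` reparameterizes the whole family of shapes, which is the high-level control the abstract contrasts with editing sets of 3D control points.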
<p>The approach encourages the modeler to build parameterized families of shapes rather than single instances. Shapes can be parameterized by scalar parameters (e.g., time or joint angle) or higher-dimensional parameters (e.g., a curve controlling how the scale of a cross section varies as it is translated). Such parameterized shapes allow easy modification of the design, since the modeler can interact with parameters that relate to high-level properties of the shape. In contrast, many geometric modeling systems use a much lower-level specification, such as through sets of many 3D control points.</p>
<p>Tools for rendering and analysis of generative models are developed using the concept of interval analysis. Each primitive operator on parametric functions has an inclusion function method, which produces an interval bound on the range of the function, given an interval bound on its domain. With these inclusion functions, robust algorithms exist for computing solutions to nonlinear systems of constraints and global minimization problems, when these problems are expressed in the modeling language. These algorithms, in turn, are developed into robust approximation techniques to compute intersections, CSG operations, and offset operations.</p>
https://thesis.library.caltech.edu/id/eprint/2865

From Geometry to Texture: Experiments Towards Realism in Computer Graphics
https://resolver.caltech.edu/CaltechETD:etd-08062007-110815
Authors: Kay, Timothy L.
Year: 1992
DOI: 10.7907/XCAM-R775
This thesis presents a new computer graphics texture element called a texel as well as an associated rendering algorithm, which together produce an appearance never before achieved in computer graphics. Unlike previous modeling primitives, which are limited to solid, crisp appearances (e.g., metal, plastics, and glass), texels have a soft, fuzzy appearance, and thus can be used to create models and images of soft objects.
This thesis presents a solution to the problem of creating fur. As an example, a Teddy bear is modeled and rendered. As part of the process, a new BRDF that can produce back-lighting effects is developed for texels. A model deformation technique using trilinear solids is developed.
This thesis then addresses a more complex example, that of creating a microscopic swatch of cloth by computationally "weaving" threads. The process of converting the resulting geometric model into texels is presented. The swatch of cloth is then replicated to cover the infinite plane seamlessly.
A new phenomenon, the texture threshold effect, is presented: it is the point at which geometry turns into texture. When viewed from beyond a certain distance threshold, the appearance of a microscopic model converges to that of a macroscopic model. The position of the texture threshold is calculated. The infinite cloth model is then rendered from beyond the texture threshold, and its cloth BRDF is extracted computationally. This BRDF is then used to render a cloth-covered car seat.
The BRDF extraction process involves sampling an image which contains spectral energy above the Nyquist limit. Hence, the use of point sampling in computer graphics is analyzed to verify that aliasing energy is controlled. The process of jittered subsampling is analyzed, correcting and completing previous attempts. The results confirm that it is possible to render complex computer graphics imagery while avoiding artifacts from aliased energy.
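Jittered (stratified) subsampling of a pixel can be sketched as follows; the interfaces are illustrative, not the thesis's implementation:

```python
import random

def jittered_samples(n, seed=0):
    """One random sample per cell of an n x n stratification of the unit
    pixel: stratification keeps the samples evenly spread, while the
    jitter converts structured aliasing into broadband noise."""
    rng = random.Random(seed)
    return [((i + rng.random()) / n, (j + rng.random()) / n)
            for i in range(n) for j in range(n)]

def estimate_coverage(f, samples):
    """Monte Carlo estimate of the area where f(x, y) holds in the pixel."""
    return sum(1 for x, y in samples if f(x, y)) / len(samples)

# A pixel half-covered by an edge at x = 0.5: a regular grid could land
# entirely on one side of the edge, but stratified jitter cannot.
cov = estimate_coverage(lambda x, y: x < 0.5, jittered_samples(8))
```

With the edge aligned to stratum boundaries the estimate here is exactly 0.5; for a general edge the error is confined to the strata the edge actually crosses, which is why the jitter bounds rather than merely hides the aliasing energy.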
https://thesis.library.caltech.edu/id/eprint/3028