Committee Feed
https://feeds.library.caltech.edu/people/Arvo-J-R/committee.rss
A Caltech Library Repository Feed
Tue, 16 Apr 2024 14:55:16 +0000

A method for the specification, composition, and testing of distributed object systems
https://resolver.caltech.edu/CaltechETD:etd-01252008-095244
Authors: Paolo A. G. Sivilotti
Year: 1998
DOI: 10.7907/z89g-gm27
The formation of a distributed system from a collection of individual components requires the ability for components to exchange syntactically well-formed messages. Several technologies exist that provide this fundamental functionality, as well as the ability to locate components dynamically based on syntactic requirements. The formation of a correct distributed system requires, in addition, that these interactions between components be semantically well-formed. The method presented in this thesis is intended to assist in the development of correct distributed systems.
We present a specification methodology based on three fundamental operators from temporal logic: initially, next, and transient. From these operators we derive a collection of higher-level operators that are used for component specification. The novel aspect of our specification methodology is that we require that these operators be used in the following restricted manner:
• A specification statement can refer only to properties that are local to a single component.
• A single component must be able to guarantee unilaterally the validity of the specification statement for any distributed system of which it is a part.

Specification statements that conform to these two restrictions we call certificates.
The first restriction is motivated by our desire for these component specifications to be testable in a relatively efficient manner. In fact, we describe a set of simplified certificates that can be translated into a testing harness by a simple parser with very little programmer intervention. The second restriction is motivated by our desire for a simple theory of composition: If a certificate is a property of a component, that certificate is also a property of any system containing that component.
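As an illustration of how such operators can be checked over component-local behavior, here is a minimal sketch in which a trace is a finite list of local states and each operator is a predicate over the trace. The trace representation and all function names are illustrative assumptions, not the thesis's formalism.

```python
# Minimal trace-based sketch of the three base temporal operators.
# States are dicts of local variables; predicates are functions of one state.
# All names here are illustrative, not taken from the thesis.

def check_initially(trace, p):
    """'initially p': p holds in the first state of the trace."""
    return p(trace[0])

def check_next(trace, p, q):
    """'p next q': whenever p holds in a state, q holds in its successor."""
    return all(q(t) for s, t in zip(trace, trace[1:]) if p(s))

def check_transient(trace, p):
    """'transient p' (simplified): p is eventually falsified on the trace."""
    return any(not p(s) for s in trace)
```

A certificate restricted to one component's local variables can then be tested against traces observed at that component alone, which is the efficiency point made above.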
Another novel aspect of our methodology is the introduction of a new temporal operator that combines both safety and progress properties. The concept underlying this operator has been used implicitly before; but by extracting this concept into a first-class operator, we are able to prove several new theorems about such properties. We demonstrate the utility of this operator and of our theorems by using them to simplify several proofs.
The restrictions imposed on certificates are severe. Although they have pleasing consequences as described above, they can also lead to lengthy proofs of system properties that are not simple conjunctions. To compensate for this difficulty, we introduce collections of certificates that we call services. Services facilitate proof reuse by encapsulating common component interactions used to establish various system properties.
We experiment with our methodology by applying it to several extended examples. These experiments illustrate the utility of our approach and convince us of the practicality of component-based distributed system development. This thesis addresses three parts of the development cycle for distributed object systems: (i) the specification of systems and components, (ii) the compositional reasoning used to verify that a collection of components satisfy a system specification, and (iii) the validation of component implementations.
https://thesis.library.caltech.edu/id/eprint/341

Dynamic load balancing and granularity control on heterogeneous and hybrid architectures
https://resolver.caltech.edu/CaltechETD:etd-02072008-075916
Authors: Jerrell R. Watts
Year: 1998
DOI: 10.7907/gvgq-3d11
The past several years have seen concurrent applications grow increasingly complex, as the most advanced techniques from academia find their way into production parallel applications. Moreover, the platforms on which these concurrent computations now execute are frequently heterogeneous networks of workstations and shared-memory multiprocessors, because of their low cost relative to traditional large-scale multicomputers. The combination of sophisticated algorithms and more complex computing environments has made existing load balancing techniques obsolete. Current methods characterize the loads of tasks in very simple terms, often fail to account for the communication costs of an application, and typically consider computational resources to be homogeneous. The complexity of current applications coupled with the fact that they are running in heterogeneous environments has also made partitioning a problem for concurrent execution an ordeal. It is no longer adequate to simply divide the problem into some number of pieces per computer and hope for the best. In a complex application, the workloads of the pieces, which may be equal initially, may diverge over time. On a heterogeneous network, the varying capabilities of the computers will widen this disparity in resource usage even further. Thus, there is a need to dynamically manage the granularity of an application, repartitioning the problem at runtime to correct inadequacies in the original partitioning and to make more effective use of computational resources.
This thesis presents techniques for dynamic load balancing in complex irregular applications. Advances over previous work are threefold: First, these techniques are applicable to networks of heterogeneous machines, including both single-processor workstations and personal computers, and multiprocessor compute servers. Second, the use of load vectors more accurately characterizes the resource requirements of tasks, including the computational demands of different algorithmic phases as well as the needs for other resources, such as memory. Finally, runtime repartitioning adjusts the granularity of the problem so that the available resources are more fully utilized. Two other improvements over earlier techniques include improved algorithms for determining the ideal redistribution of work as well as advanced techniques for selecting which tasks to transfer to satisfy those ideals. The latter algorithms incorporate the notion of task migration costs, including the impact on an application's communication locality. The improvements listed above are demonstrated on both industrial applications and small parametric problems on networks of heterogeneous computers as well as traditional large-scale multicomputers.
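To make the load-vector idea concrete, here is a minimal greedy-placement sketch for a heterogeneous network. The vector form (cpu, mem), the completion-time scoring rule, and all names are assumptions for illustration only, not the thesis's algorithms.

```python
# Hedged sketch: greedy placement of tasks described by load vectors
# (cpu_work, memory) onto machines with heterogeneous speeds and memory
# limits. Each task goes to the feasible machine that would finish its
# accumulated cpu work soonest.

def place_tasks(task_vectors, machine_speeds, mem_limits):
    loads = [0.0] * len(machine_speeds)      # accumulated cpu work
    mem_used = [0.0] * len(machine_speeds)
    assignment = []
    for cpu, mem in task_vectors:
        # memory acts as a hard feasibility constraint
        feasible = [m for m in range(len(machine_speeds))
                    if mem_used[m] + mem <= mem_limits[m]]
        # speed scales cpu work into estimated completion time
        best = min(feasible,
                   key=lambda m: (loads[m] + cpu) / machine_speeds[m])
        loads[best] += cpu
        mem_used[best] += mem
        assignment.append(best)
    return assignment
```

A machine twice as fast absorbs roughly twice the cpu work before another machine becomes preferable, which is the heterogeneous-capacity point made above.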
https://thesis.library.caltech.edu/id/eprint/539

Performance modeling for concurrent particle simulations
https://resolver.caltech.edu/CaltechETD:etd-01242008-132610
Authors: Marc A. Rieffel
Year: 1998
DOI: 10.7907/sx57-5d89
This thesis develops an application- and architecture-independent framework for predicting the runtime and memory requirements of particle simulations in complex three-dimensional geometries. Both sequential and concurrent simulations are addressed, on a variety of homogeneous and heterogeneous architectures. The models are considered in the context of neutral flow Direct Simulation Monte Carlo (DSMC) simulations for semiconductor manufacturing and aerospace applications.
Complex physical and chemical processes render algorithmic analysis alone insufficient for understanding the performance characteristics of particle simulations. For this reason, detailed knowledge of the interaction between the physics and chemistry of a problem and the numerical method used to solve it is required.
Prediction of runtime and storage requirements of sequential and concurrent particle simulations is possible with the use of these models. The feasibility of simulations for given physical systems can also be determined. While the present work focuses on the concurrent DSMC method, the same modeling techniques can be applied to other numerical methods, such as Particle-In-Cell (PIC) and Navier-Stokes (NS).
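The flavor of such a model can be sketched as a simple parameterized cost function: per-step runtime as particle work plus cell work, divided across processors, plus a communication term. The functional form, the coefficient values, and all names below are illustrative assumptions, not the thesis's calibrated models.

```python
# Illustrative runtime model for one simulation step of a particle code.
# Coefficients (seconds per operation) would be calibrated per machine;
# the values here are placeholders.

def step_runtime(n_particles, n_cells, n_procs,
                 t_move=1e-6, t_collide=2e-6, t_cell=5e-7, t_comm=1e-4):
    # compute cost: per-particle move/collide work plus per-cell work
    compute = (n_particles * (t_move + t_collide) + n_cells * t_cell) / n_procs
    # communication cost grows with the number of communicating partners
    comm = t_comm * (n_procs - 1) if n_procs > 1 else 0.0
    return compute + comm
```

Evaluating such a model before a run gives the feasibility check mentioned above: memory can be modeled the same way, with per-particle and per-cell storage coefficients.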
https://thesis.library.caltech.edu/id/eprint/326

A structured approach to parallel programming
https://resolver.caltech.edu/CaltechETD:etd-01242008-074143
Authors: Berna Linda Massingill
Year: 1998
DOI: 10.7907/5ma9-h225
Parallel programs are more difficult to develop and reason about than sequential programs. There are two broad classes of parallel programs: (1) programs whose specifications describe ongoing behavior and interaction with an environment, and (2) programs whose specifications describe the relation between initial and final states. This thesis presents a simple, structured approach to developing parallel programs of the latter class that allows much of the work of development and reasoning to be done using the same techniques and tools used for sequential programs. In this approach, programs are initially developed in a primary programming model that combines the standard sequential model with a restricted form of parallel composition that is semantically equivalent to sequential composition. Such programs can be reasoned about using sequential techniques and executed sequentially for testing. They are then transformed for execution on typical parallel architectures via a sequence of semantics-preserving transformations, making use of two secondary programming models, both based on parallel composition with barrier synchronization and one incorporating data partitioning. The transformation process for a particular program is typically guided and assisted by a parallel programming archetype, an abstraction that captures the commonality of a class of programs with similar computational features and provides a class-specific strategy for producing efficient parallel programs. Transformations may be applied manually or via a parallelizing compiler. Correctness of transformations within the primary programming model is proved using standard sequential techniques. Correctness of transformations between the programming models and between the models and practical programming languages is proved using a state-transition-based operational model.
This thesis presents: (1) the primary and secondary programming models, (2) an operational model that provides a common framework for reasoning about programs in all three models, (3) a collection of example program transformations with arguments for their correctness, and (4) two groups of experiments in which our overall approach was used to develop example applications. The specific contribution of this work is to present a unified theory/practice framework for this approach to parallel program development, tying together the underlying theory, the program transformations, and the program-development methodology.
https://thesis.library.caltech.edu/id/eprint/321

Analysis of scalable algorithms for dynamic load balancing and mapping with application to photo-realistic rendering
https://resolver.caltech.edu/CaltechETD:etd-01232008-111520
Authors: Alan Bryant Heirich
Year: 1998
DOI: 10.7907/ZVYW-H876
This thesis presents and analyzes scalable algorithms for dynamic load balancing and mapping in distributed computer systems. The algorithms are distributed and concurrent, have no central thread of control, and require no centralized communication. They are derived using spectral properties of graphs: graphs of physical network links among computers in the load balancing problem, and graphs of logical communication channels among processes in the mapping problem. A distinguishing characteristic of these algorithms is that they are scalable: the expected cost of execution does not increase with problem scale. This is proven in a scalability theorem which shows that, for several simple disturbance models, the rate of convergence to a solution is independent of scale. This property is extended through simulated examples and informal argument to general and random disturbances. A worst-case disturbance is presented and shown to occur with vanishing probability as the problem scale increases. To verify these conclusions, the load balancing algorithm is deployed in support of a photo-realistic rendering application, based on Monte Carlo path tracing, on a parallel computer system. The performance and scaling of this application, and of the dynamic load balancing algorithm, are measured on different numbers of computers. The results are consistent with the predictions of scalability, and the cost of load balancing is seen to be non-increasing for increasing numbers of computers. The quality of load balancing is evaluated and compared with the quality of solutions produced by competing approaches for up to 1,024 computers. This comparison shows that the algorithm presented here is as good as or better than the most popular competing approaches for this application.
The thesis then presents the dynamic mapping algorithm, with simulations of a model problem, and suggests that the pair of algorithms presented here may be an ideal complement to more expensive algorithms such as the well-known recursive spectral bisection.
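The general family of decentralized schemes analyzed spectrally above can be illustrated by the classic first-order diffusion update, in which each computer exchanges a fixed fraction of its load difference with its network neighbors. This is the textbook diffusion scheme, shown here only as a sketch; it is not necessarily the thesis's exact update rule.

```python
# Classic diffusive load balancing: loads' = loads - alpha * L * loads,
# where L is the graph Laplacian of the physical network. No central
# thread of control is needed: each node only talks to its neighbors.

def diffuse(loads, edges, alpha=0.25, steps=50):
    """loads: per-node work; edges: undirected (i, j) network links."""
    loads = list(loads)
    for _ in range(steps):
        flows = [0.0] * len(loads)
        for i, j in edges:
            f = alpha * (loads[i] - loads[j])   # work flows downhill
            flows[i] -= f
            flows[j] += f
        loads = [l + f for l, f in zip(loads, flows)]
    return loads
```

The convergence rate is governed by the eigenvalues of the Laplacian, which is exactly where the spectral analysis mentioned above enters.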
https://thesis.library.caltech.edu/id/eprint/304

Visual methods for three-dimensional modeling
https://resolver.caltech.edu/CaltechETD:etd-02072008-115723
Authors: Jean-Yves Bouguet
Year: 1999
DOI: 10.7907/hc2c-sp47
Most animals use vision as a primary sense for interacting with their environment. Navigation and manipulation of objects are among the tasks that are better achieved with an understanding of the three-dimensional structure of the scene.
In this thesis, we present a variety of computational techniques for estimating 3D shape from 2D images, based on both passive and active technologies.
The first proposed method is purely passive. In this technique, a single camera is moved in an unconstrained manner around the scene to be modeled as it acquires a sequence of images. The reconstruction process then consists of recovering the trajectory of the camera, as well as the 3D structure of the scene, using only the information contained in the images.
The second method is based on active lighting technology. In the spirit of standard 3D scanning methods, a projector is used to project light patterns onto the scene. The shape of the scene is then inferred from the way the patterns deform on the objects. The main novelty of our scheme compared to traditional methods is in the nature of the patterns and the type of image processing associated with them. Instead of using standard binary patterns made of black and white stripes, our scheme uses a sequence of grayscale patterns with a sinusoidal profile in brightness intensity. This choice allows us to establish correspondence (between camera image and projector image) in a dense fashion, leading to depth computation at (almost) every pixel in the image.
The last reconstruction method that we propose in this thesis is an alternative 3D scanning scheme that does not require any device besides a camera. The main idea is to replace the projector with a standard light source (such as a desk lamp), and use a pencil (or any other object with a straight edge) to cast planar shadows in the scene. The 3D geometry of the scene is then inferred from the way the shadow naturally deforms on the objects in the scene. Since this technique is largely inspired by structured lighting, we call it 'weakly structured lighting.'
https://thesis.library.caltech.edu/id/eprint/541

Visual input for pen-based computers
https://resolver.caltech.edu/CaltechETD:etd-03152006-094551
Authors: Mario Enrique Munich (mariomu@vision.caltech.edu, ORCID: 0000-0002-6665-7473)
Year: 2000
DOI: 10.7907/1VW0-ZG46
The development of computer technology has been accompanied by a parallel evolution of the interface between humans and machines, giving rise to interface systems inspired by human communication. Vision has been demonstrated to be the sense of choice for face recognition, gesture recognition, lip reading, etc. This thesis presents the design and implementation of a camera-based human-computer interface for the acquisition of handwriting. The camera focuses on the sheet of paper and images the handwriting; computer analysis of the resulting sequence of images enables the trajectory of the pen to be tracked and the times when the pen is in contact with the paper to be detected. The recovered trajectory is shown to have sufficient spatio-temporal resolution and accuracy to enable handwritten character recognition.
Signatures can be acquired with the camera-based interface at a resolution sufficient for verification. This thesis describes the performance of a visual-acquisition signature verification system, emphasizing the importance of the parameterization of the signature for achieving good classification results. The generalization error of the verification algorithm is estimated using a technique that compensates for the small number of example signatures and forgeries provided by the subjects.
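Comparing two acquired pen trajectories is, at heart, a curve-alignment problem; as an illustration, here is the classic dynamic-programming matching (dynamic time warping) baseline of the kind this line of work compares against. The thesis's own sub-sampling-resolution matching method is not reproduced here.

```python
# Dynamic time warping between two sampled pen trajectories, each a
# sequence of (x, y) points: minimal cumulative point-to-point distance
# over monotone alignments, computed by dynamic programming.

def dtw_distance(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = ((a[i - 1][0] - b[j - 1][0]) ** 2
                    + (a[i - 1][1] - b[j - 1][1]) ** 2) ** 0.5
            # extend the cheapest of the three admissible predecessors
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

A small distance between a candidate signature and stored templates then supports acceptance; a large one suggests a forgery.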
The problem of establishing correspondence and measuring the similarity of a pair of planar curves, in our case a pair of signatures, arises in many applications in computer vision and pattern recognition. This thesis presents a new method for comparing planar curves and for performing matching at sub-sampling resolution. The analysis of the algorithm as well as its structural properties are described. The performance of the new technique is evaluated on the problem of signature verification and compared to the performance of the well-known Dynamic Programming Matching algorithm.
https://thesis.library.caltech.edu/id/eprint/962

Kinematic Measurement and Feature Sets for Automatic Speech Recognition
https://resolver.caltech.edu/CaltechTHESIS:11192010-074900476
Authors: Daniel Cark Fain
Year: 2001
DOI: 10.7907/9vse-8c78
This thesis examines the use of measured and inferred kinematic information in automatic speech recognition and lipreading, and investigates the relative information content and recognition performance of vowels and consonants. The kinematic information describes the motions of the organs of speech, the articulators. The contributions of this thesis include a new device and set of algorithms for lipreading (their design, construction, implementation, and testing); the incorporation of direct articulator-position measurements into a speech recognizer; and a reevaluation of some assumptions regarding vowels and consonants.
The motivation for including articulatory information is to improve modeling of coarticulation and to reconcile multiple input modalities for lipreading. Coarticulation, a ubiquitous phenomenon, is the process by which speech sounds are modified by preceding and following sounds.
To be useful in practice, a recognizer will have to infer articulatory information from sound, video, or both. Previous work made progress towards recovery of articulation from sound. The present project assumes that such recovery is possible; it examines the advantage of joint acoustic-articulatory representations over acoustic-only ones. Also reported is an approach to recovery from video in which camera placement (side view, head-mounted) and lighting are chosen to robustly obtain lip-motion information.
Joint acoustic-articulatory recognition experiments were performed using the University of Wisconsin X-ray Microbeam Speech Production Database. Speaker-dependent monophone recognizers, based on hidden Markov models, were tested on paragraphs each lasting about 20 seconds. Results were evaluated at the phone level and tabulated by several classes (vowel, stop, and fricative). Measured articulator coordinates were transformed by principal components analysis, and velocity and acceleration were appended. Concatenating the transformed articulatory information to a standard acoustic (cepstral) representation reduced the error rate by 7.4%, demonstrating across-speaker statistical significance (p = 0.018). Articulation improved recognition of male speakers more than female, and recognition of vowels more than fricatives or stops.
The analysis of vowels, stops, and fricatives included both the articulatory recognizer of chapter 3 and other recognizers for comparison. The information content of the different classes was also estimated. Previous assumptions about recognition performance are shown to be false, and the findings on information content require consonants to be defined to include vowel-like sounds.
https://thesis.library.caltech.edu/id/eprint/6182

Mathematical Methods for Image Synthesis
https://resolver.caltech.edu/CaltechTHESIS:08272010-091235772
Authors: Min Chen
Year: 2002
DOI: 10.7907/WDPH-N912
<p>This thesis presents the application of some advanced mathematical methods to image synthesis. The main thrust of our work is to formulate and analyze rendering problems in terms of mathematical concepts, and to develop new mathematical machinery to pursue analytical solutions or nearly analytical approximations to them. An enhanced Taylor expansion formula is derived for the perturbation of a general ray-traced path, and new theoretical results are presented for spatially-varying luminaires. Building on these results, new deterministic algorithms are presented for simulating direct lighting and other scattering effects involving a wide range of non-diffuse surfaces and spatially-varying luminaires. Our work greatly extends the repertoire of non-Lambertian effects that can be handled in a deterministic fashion.</p>
<p>First, my previous work on "Perturbation Methods for Image Synthesis" is extended here in several ways: 1) I propose a coherent framework using closed-form path Jacobians and path Hessians to perturb a general ray-traced path involving both specular reflections and refractions, and an algorithm similar to that used for interactive specular reflections is employed to simulate lens effects. 2) The original path Jacobian formula is simplified by means of matrix manipulations. 3) Path Jacobians and Hessians are extended to parametric surfaces which may not have an implicit definition. 4) Theoretical comparisons and connections are made with related work including pencil tracing and ray differentials. 5) Potential applications of perturbation methods of this nature in rendering and computer vision are identified.</p>
<p>Next, a closed-form solution is derived for the irradiance at a point on a surface due to an arbitrary polygonal Lambertian luminaire with linearly-varying radiant exitance. The solution consists of elementary functions and a single well-behaved special function known as the Clausen integral. The expression follows from the Taylor expansion and a recurrence formula for an extension of double-axis moments, and is then verified by Stokes' theorem and Monte Carlo simulation. The study of linearly-varying luminaires introduces much of the machinery needed to derive closed-form solutions for the general case of luminaires with radiance distributions that vary polynomially in both position and direction.</p>
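As an aside on how "well-behaved" the Clausen integral is, it can be evaluated directly from its standard defining series Cl2(t) = Σ_{k≥1} sin(kt)/k². The sketch below is only a naive series evaluation for illustration; a production implementation would use a faster expansion.

```python
# Naive series evaluation of the Clausen integral Cl2(theta).
# Cl2(pi/2) equals Catalan's constant (~0.9159655941); the partial-sum
# error is bounded by roughly 1/terms**2 away from multiples of 2*pi.

import math

def clausen(theta, terms=200000):
    return sum(math.sin(k * theta) / (k * k) for k in range(1, terms + 1))
```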
<p>Finally, the concept of irradiance tensors is generalized to account for inhomogeneous radiant exitance distributions from luminaires. These tensors are composed of scalar elements consisting of constrained rational polynomials integrated over regions of the sphere, which arise frequently in simulating rendering effects due to polynomially-varying luminaires. Several recurrence relations are derived for generalized irradiance tensors and their scalar elements, which reduce the surface integrals associated with spatially-varying luminaires to one-dimensional boundary integrals, leading to closed-form solutions in polyhedral environments. These formulas extend the range of illumination and non-Lambertian effects that can be computed deterministically, including illumination from polynomially-varying luminaires, and reflections from and transmissions through glossy surfaces due to these emitters. In particular, we have derived a general tensor formula for the irradiance due to a luminaire whose radiant exitance varies according to a monomial of any order, which subsumes Lambert's formula and expresses the solution for higher-order monomials in terms of those for lower-order cases.</p>
https://thesis.library.caltech.edu/id/eprint/6013

Automating Resource Management for Distributed Business Processes
https://resolver.caltech.edu/CaltechETD:etd-11012005-093745
Authors: Roman Ginis (ginis@alumni.caltech.edu)
Year: 2002
DOI: 10.7907/9GXT-BD03
A distributed business process is a set of related activities performed by independent resources offering services for lease. For instance, constructing an office building involves hundreds of activities such as excavating, plumbing and carpentry performed by machines and subcontractors, whose activities are related in time, space, cost and other dimensions. In the last decade, Internet-based middleware has linked consumers with resources and services, enabling consumers to more efficiently locate, select and reserve the resources for use in business processes. This recent capability creates an opportunity for a new automation of resource management that can assign the optimal resources to the activities of a business process to maximize its utility to the consumer and yield substantial gains in operational efficiency.
This thesis explores two basic problems towards automating the management of distributed business processes: 1. How to choose the best resources for the activities of a process (the Activity Resource Assignment (ARA) optimization problem); and 2. How to reserve the resources chosen for a process as an atomic operation when time has value, i.e., commit all resources or no resources (the Distributed Service Commit (DSC) problem). I believe these will become the typical optimization and agreement problems between consumers and producers in a networked service economy.
I propose a solution to the ARA optimization problem by modeling it as a special type of integer program, and I give a method for solving it efficiently for a large class of practical cases. Given a problem instance, the method extracts the structure of the problem and, using a new concept of variable independence, recursively simplifies it while retaining at least one optimal solution. The reduction operation is guided by a novel procedure that makes use of recent advances in the tree decomposition of graphs from graph complexity theory.
The solution to the DSC problem is an algorithm based on financial instruments and the two-phase commit protocol adapted for services. The method achieves an economically sensible atomic reservation agreement between multiple distributed resources and consumers in a free market environment.
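The atomic-reservation idea can be sketched as a plain two-phase commit over resource offers: each resource first places a tentative hold (phase one), and the reservation is finalized only if every hold succeeds (phase two). This sketch omits the financial-instrument layer described above; the class and method names are illustrative.

```python
# Minimal two-phase-commit sketch for atomic resource reservation:
# either every resource in the set is reserved, or none is.

class Resource:
    def __init__(self, name, available=True):
        self.name, self.available, self.held = name, available, False

    def prepare(self):              # phase one: place a tentative hold
        if self.available and not self.held:
            self.held = True
            return True
        return False

    def commit(self):               # phase two: make the hold final
        self.available, self.held = False, False

    def abort(self):                # release the tentative hold
        self.held = False

def reserve_all(resources):
    prepared = []
    for r in resources:
        if r.prepare():
            prepared.append(r)
        else:                       # any failure rolls back every hold
            for p in prepared:
                p.abort()
            return False
    for r in prepared:
        r.commit()
    return True
```

In the free-market setting described above, the hold itself would carry a price (e.g., an option premium), which is what makes the agreement economically sensible for both sides.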
I expect the automation of resource management addressed in my thesis and elsewhere will pave the way for more efficient business operations in the networked economy.
https://thesis.library.caltech.edu/id/eprint/4357

Discrete Exterior Calculus
https://resolver.caltech.edu/CaltechETD:etd-05202003-095403
Authors: Anil Nirmal Hirani (hirani@illinois.edu, ORCID: 0000-0003-3506-1703)
Year: 2003
DOI: 10.7907/ZHY8-V329
<p>This thesis presents the beginnings of a theory of discrete exterior calculus (DEC). Our approach is to develop DEC using only discrete combinatorial and geometric operations on a simplicial complex and its geometric dual. The derivation of these may require that the objects on the discrete mesh, but not the mesh itself, be interpolated.</p>
<p>Our theory includes not only discrete equivalents of differential forms, but also discrete vector fields and the operators acting on these objects. Definitions are given for discrete versions of all the usual operators of exterior calculus. The presence of forms and vector fields allows us to address their various interactions, which are important in applications. In many examples we find that the formulas derived from DEC are identical to the existing formulas in the literature. We also show that the circumcentric dual of a simplicial complex plays a useful role in the metric-dependent part of this theory. The appearance of dual complexes leads to a proliferation of the operators in the discrete theory.</p>
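One standard ingredient of any DEC, shown here as a hedged sketch rather than the thesis's full operator set, is the discrete exterior derivative realized as a signed incidence matrix, which makes the identity d∘d = 0 hold by construction.

```python
# Textbook DEC sketch on an oriented simplicial complex: the discrete
# exterior derivative on 0-forms (functions on vertices) and 1-forms
# (values on oriented edges), with d(d(f)) = 0 by construction.

def d0(f, edges):
    """Discrete d on a 0-form: difference of f across each oriented edge."""
    return [f[b] - f[a] for a, b in edges]

def d1(omega, triangles, edges):
    """Discrete d on a 1-form: signed sum of omega around each triangle
    boundary, +1 when the boundary edge matches the stored orientation."""
    index = {e: i for i, e in enumerate(edges)}
    out = []
    for a, b, c in triangles:
        total = 0.0
        for u, v in ((a, b), (b, c), (c, a)):
            if (u, v) in index:
                total += omega[index[(u, v)]]
            else:                    # edge stored with opposite orientation
                total -= omega[index[(v, u)]]
        out.append(total)
    return out
```

The vanishing of d1 applied to d0(f) is the discrete counterpart of "the boundary of a boundary is empty", the combinatorial fact the purely discrete point of view builds on.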
<p>One potential application of DEC is to variational problems, which come equipped with a rich exterior calculus structure. On the discrete level, such structures can be retained only if a discrete exterior calculus is available; one of the objectives of this thesis is to fill this gap. There are many constraints in numerical algorithms that naturally involve differential forms. Preserving such features directly on the discrete level is another goal, overlapping with our goals for variational problems.</p>
<p>In this thesis we have tried to push a purely discrete point of view as far as possible. We argue that this can only be pushed so far, and that interpolation is a useful device. For example, we found that interpolation of functions and vector fields is very convenient. In future work we intend to continue this interpolation point of view, extending it to higher-degree forms, especially in the context of the sharp, Lie derivative and interior product operators. Some preliminary ideas on this point of view are presented in the thesis. We also present some preliminary calculations of formulas on regular nonsimplicial complexes.</p>
https://thesis.library.caltech.edu/id/eprint/1885

Shape Reconstruction from Shadows and Reflections
https://resolver.caltech.edu/CaltechETD:etd-05242005-162056
Authors: Silvio Savarese
Year: 2005
DOI: 10.7907/FH0V-3M10
<p>Measuring automatically the shape of physical objects in order to obtain corresponding digital models has become a useful, often indispensable, tool in design, engineering, art conservation, computer graphics, medicine and science. Machine vision has proven to be more appealing than competing technologies. Ideally, we would like to be able to acquire digital models of generic objects by simply walking around the scene, while filming with a handheld camcorder. Thus, one of the main challenges in modern machine vision is to develop algorithms that: i) are inexpensive, fast and accurate; ii) can handle objects with arbitrary appearance properties and shape; and iii) need little or no user intervention.</p>
<p>In this thesis, we address these challenges. In the first part, we present a novel 3D reconstruction technique which makes use of minimal and inexpensive equipment. We call this technique "shadow carving". We explore the information contained in the shadows that an object casts upon itself. An algorithm is provided that makes use of this information. The algorithm iteratively recovers an estimate of the object which i) approximates the object's shape more and more closely; and ii) is provably an upper bound to the object's shape. Shadow carving is the first technique to incorporate "shadow" information in a multi-view shape recovery framework. We have implemented our approach in a simple table-top system and validated our algorithm by recovering the shape of real objects.</p>
<p>It is well known that vision-based 3D scanning systems handle specular or highly reflective surfaces only poorly. The cause of this deficiency is most likely not intrinsic, but rather due to our lack of understanding of the relevant cues. In the second part of this thesis, we focus on how to promote mirror reflections from "noise" to "signal". We first present a geometrical and algebraic characterization of how a patch of the scene is mapped into an image by a mirror surface of given shape. We then develop solutions to the inverse problem of deriving surface shape from mirror reflections in a single image. We validate our theoretical results with both numerical simulations and experiments with real surfaces.</p>
<p>A third goal of this thesis is advancing our understanding of human perception of shape from reflections. Although the idea of perception of shape from different visual cues (e.g., shading, texture, etc.) has been extensively discussed in the past, little is known about the extent to which highlights and specular reflections carry useful information for shape perception. We use psychophysics to study this capability. Our goal is to provide a benchmark, as well as inspire possible technical approaches, for our computational work. We find that, surprisingly, humans are very poor at judging the shape of mirror surfaces when additional visual cues (i.e., contour, shading, stereo, texture) are not visible.</p>
https://thesis.library.caltech.edu/id/eprint/2002