Advisor Feed
https://feeds.library.caltech.edu/people/Antonsson-E-K/advisor.rss
A Caltech Library Repository Feed (RSS 2.0: http://www.rssboard.org/rss-specification; generator: python-feedgen; language: en; last built Tue, 16 Apr 2024 14:54:26 +0000)

A method for the representation and manipulation of uncertainties in preliminary engineering design
https://resolver.caltech.edu/CaltechETD:etd-11152007-080746
Authors: Kristin Lee Wood (Wood-K-L)
Year: 1990
DOI: 10.7907/g1hs-p655
Each stage of the engineering design process, and particularly the preliminary phase, includes imprecision, stochastic uncertainty, and possibilistic uncertainty. A technique is presented by which the various levels of imprecision (where imprecision is "uncertainty in choosing among alternatives") in the description of design elements may be represented and manipulated. The calculus of fuzzy sets provides the foundation of the approach. An analogous method for representing and manipulating imprecision using probability calculus is presented and compared with the fuzzy-calculus technique. Extended Hybrid Numbers are then introduced to combine the effects of imprecision with stochastic and possibilistic uncertainty. Using these results, a preliminary set of metrics is proposed by which a designer can make decisions among alternative configurations in preliminary design.
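The alpha-cut machinery underlying the fuzzy-calculus representation can be illustrated with a minimal sketch (a hypothetical example, not code from the thesis): a triangular fuzzy number models an imprecise design parameter, and interval arithmetic on its alpha-cuts propagates the imprecision through a performance function.

```python
# A minimal sketch (hypothetical example, not from the thesis): an imprecise
# design parameter as a triangular fuzzy number, propagated through a
# performance function via alpha-cuts and interval arithmetic.

def alpha_cut(low, peak, high, alpha):
    """Interval of values whose membership is at least alpha (triangular)."""
    return (low + alpha * (peak - low), high - alpha * (high - peak))

def propagate(f, cut):
    """Image of an interval under a monotone performance function f."""
    lo, hi = f(cut[0]), f(cut[1])
    return (min(lo, hi), max(lo, hi))

# Imprecise beam thickness, "about 5 mm, somewhere in [3, 8]":
for alpha in (0.0, 0.5, 1.0):
    cut = alpha_cut(3.0, 5.0, 8.0, alpha)
    stiffness = propagate(lambda t: t ** 3, cut)  # stiffness ~ t^3 (illustrative)
    print(alpha, cut, stiffness)
```

At alpha = 1 the cut collapses to the single most-preferred value; lower alpha levels admit wider intervals of less-preferred alternatives.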
In general, the hypothesis underlying the techniques described above is that making more information available than conventional approaches do will enhance the designer's decision-making capability in preliminary design. A number of elemental concepts supporting this hypothesis have been formulated during the evolution of this work:
• Imprecision is a hallmark of preliminary engineering design. To make decisions based on the information available to the designer and on basic engineering principles, the imprecise descriptions of possible solution technologies must be formalized and quantified in some way. The application of the fuzzy calculus, along with a fundamental interpretation, provides a new and straightforward means by which imprecision can be represented and manipulated.
• Besides imprecision, other uncertainties, categorized as stochastic and possibilistic, are prevalent in design, even in the early stages of the design process. Providing a method by which these uncertainties can be represented in the context of the imprecision is an important and necessary step when considering the evaluation of a design's performance. Extended Hybrid Numbers have been introduced in this work in order to couple the stochastic and possibilistic components of uncertainty with imprecision such that no information is lost in the process.
• Because of the size, coupling, and complexity of the functional-requirement space in any realistic design, it is difficult to make decisions with regard to the performance of a design, even with an Extended Hybrid Number representation. Defining and utilizing metrics (or figures of merit) in the evaluation of how well a design meets the functional requirements reduces the complexity of this process. Such metrics also have merit when we begin to think of languages of design and adding the necessary pragmatics of "will a generated or proposed design satisfy the performance requirements with respect to the ever-present and unavoidable uncertainties?" These concepts form the central focus of this work. The mathematical methods presented here were developed to support and formalize these ideas.
https://thesis.library.caltech.edu/id/eprint/4581

High-resolution optoelectronic and photogrammetric 3-D surface geometry acquisition and analysis
https://resolver.caltech.edu/CaltechETD:etd-08292007-091850
Authors: Wen-Jean Hsueh (Hsueh-W)
Year: 1993
DOI: 10.7907/wn0s-6y22
A high-resolution, high-speed, automatic, and non-contact 3-D surface geometry measuring system has been developed. It is based on a photogrammetric and optoelectronic technique that adopts lateral-photoeffect diode detectors sensitive in the near-infrared range. Two cameras in stereo positions are both equipped with the large 2-axis analog detectors. A light beam is focused and scanned onto the surface of an object as a very small light spot. Excitations on the detectors generated by the light reflected from the spot create photocurrents that are transformed into 2-D position signals in a very short time. A simple set of calculations photogrammetrically triangulates the two sets of 2-D coordinates from the detectors into the 3-D coordinates of the light spot. Because only one small light spot in the scene is illuminated at a time, the stereo-correspondence problem is solved in real time. The detectors are able to collect data at 10 kHz with 4,096 x 4,096 resolution based on a 12-bit A/D converter. The resolution and precision can be improved up to eight times by oversampling. The system is able to resolve, for example, less than 10 µm from 47 cm away with a nominal viewing volume of (22 cm)³. Its performance is better than that of contemporary coordinate measuring, range finding, shape digitizing, and machine vision systems, and is comparable to the best aspects of each existing system. The irregular 3-D data it generates can be regularized so that data-processing algorithms designed for image systems may be applied. The system is designed for the acquisition of general surface geometries, such as fabricated parts, machined surfaces, biological surfaces, and deformed parts. The system will be useful in solving a variety of 3-D surface geometry measuring problems in engineering design, manufacturing, inspection, robot kinematics measurement, and vision.
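The triangulation step can be sketched for an idealized geometry (a rectified stereo pair with parallel optical axes; the thesis hardware and calibration are more involved, and all names here are illustrative):

```python
# A minimal sketch (idealized rectified stereo, not the thesis detector
# geometry): triangulating a light spot's 3-D position from two 2-D
# detector readings.

def triangulate(xl, yl, xr, f, baseline):
    """Left/right detector x-coordinates (and left y) -> 3-D point.

    f        : focal length (same units as detector coordinates)
    baseline : distance between the two camera centers
    """
    disparity = xl - xr
    if disparity == 0:
        raise ValueError("point at infinity: zero disparity")
    z = f * baseline / disparity          # depth from similar triangles
    x = xl * z / f                        # back-project left-image x
    y = yl * z / f                        # back-project left-image y
    return (x, y, z)

# A spot seen at x=10 on the left detector and x=8 on the right,
# with f = 100 and a 50-unit baseline:
print(triangulate(10.0, 4.0, 8.0, 100.0, 50.0))  # → (250.0, 100.0, 2500.0)
```

Because only one spot is lit at a time, the left and right readings always refer to the same point, which is what dissolves the stereo-correspondence problem.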
https://thesis.library.caltech.edu/id/eprint/3268

MEMS design: the geometry of silicon micromachining
https://resolver.caltech.edu/CaltechETD:etd-09162005-134646
Authors: Ted J. Hubbard (Hubbard-T-J)
Year: 1994
DOI: 10.7907/TK4C-M144
The design of MEMS (Micro Electro Mechanical Systems) on the millimeter to micron length scales will be examined in this thesis.
A very broad base of knowledge has been developed concerning the etching processes commonly used in MEMS fabrication. The fundamental problem we have set out to study is how to model the shape transformations that occur in MEMS fabrication. The ultimate goal is to determine the input mask geometry required for a desired output etched shape.
The body of work begins with the crystal structure of silicon and ends with etched shapes. The underlying crystal structure causes different etch rates in different directions; this behavior has been characterized to obtain rate models. The information in these rate models has then been used in a number of shape modelers. High-level models like the Eshape model provide not only simulation but a framework for true design. Other models, such as the Cellular Automata model, take a different approach and provide flexible and robust simulators. The tools were used to develop real-world MEMS applications such as compensation structures.
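The cellular-automaton style of etch simulation can be sketched in a few lines (a toy isotropic model, not the thesis simulator; a real model would make the removal rule direction-dependent using the crystal rate data):

```python
# A minimal sketch (toy model, not the thesis simulator): a cellular-automaton
# etch step on a 2-D grid of silicon cells. Cells exposed to etchant (under a
# mask opening, or adjacent to an already-etched cell) are removed each step;
# the neighbor rule is where direction-dependent rates would enter.

def etch_step(grid, mask):
    """grid[r][c]: True = solid silicon. mask[c]: True = opening at surface."""
    rows, cols = len(grid), len(grid[0])
    exposed = set()
    for c in range(cols):
        if mask[c] and grid[0][c]:
            exposed.add((0, c))          # surface cells under the mask opening
    for r in range(rows):
        for c in range(cols):
            if not grid[r][c]:           # an etched cell exposes its neighbors
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc]:
                        exposed.add((rr, cc))
    for r, c in exposed:
        grid[r][c] = False
    return grid

grid = [[True] * 5 for _ in range(3)]
mask = [False, False, True, False, False]   # one opening at the center
etch_step(grid, mask)                        # etches the cell below the opening
etch_step(grid, mask)                        # then deepens and undercuts
```

Running the step repeatedly evolves the etch front, which is exactly the forward problem whose inverse (mask from shape) the thesis addresses.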
As important as the individual models is the ability to integrate them into a coherent design tool and to allow information to flow between its parts. This synthesis allows a fuller understanding of the etching process from start to finish.
It is important to note that while this thesis deals with etching, the methods developed are very general and are applicable to many shape transformation processes.
https://thesis.library.caltech.edu/id/eprint/3565

Robust mask-layout and process synthesis in micro-electro-mechanical-systems (MEMS) using genetic algorithms
https://resolver.caltech.edu/CaltechETD:etd-08302005-131428
Authors: Lin Ma (Ma-L)
Year: 2001
DOI: 10.7907/e8hm-z754
This thesis reports a Genetic Algorithm approach for the mask-layout and process flow synthesis problem. For a given desired target shape, an optimal mask-layout and process flow can be automatically generated using the Genetic Algorithm synthesis approach. The Genetic Algorithm manipulates and evolves a population of candidate solutions (mask-layouts and process parameters) by utilizing a process simulation tool to evaluate the performance of the candidate solutions. For the mask-layout and process flow synthesis problem, encoding schemes, selection schemes, and genetic operations have been developed to effectively explore the solution space and control the evolution and convergence of the solutions.
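The synthesis loop described above can be sketched generically (encoding, target, and the stand-in simulator below are illustrative, not the thesis's actual representation or etch model):

```python
# A minimal sketch (illustrative encoding, not from the thesis): a genetic
# algorithm evolving a bit-string mask-layout, scoring candidates with a
# stand-in for the process-simulation tool.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]        # desired etched profile (stand-in)

def simulate(mask):
    """Stand-in for the etch simulator; here the mask maps straight through."""
    return mask

def fitness(mask):
    """Negative mismatch between simulated shape and target shape."""
    return -sum(a != b for a, b in zip(simulate(mask), TARGET))

def evolve(pop_size=30, generations=60, rng=random.Random(0)):
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(TARGET))   # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(len(child))         # point mutation
            child[i] ^= rng.randint(0, 1)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

In the thesis the genome additionally encodes process parameters (etchant sequence, etch times), and the fitness call invokes the bulk wet-etching simulator rather than an identity mapping.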
The synthesis approach is tested on mask-layout and process synthesis for bulk wet etching. By integrating a bulk wet-etching simulation tool into the Genetic Algorithm iterations, the algorithm can automatically generate a mask-layout and process flow that fabricate 3-D geometry close to the desired 3-D target shape. For structures with convex corners, complex compensation structures can be synthesized by the algorithm. More importantly, the process flow can also be synthesized. For multi-step wet-etching processes, the proper etchant sequence and etch times for each etch step can be synthesized automatically by the algorithm. When a choice among different process flows exists, the enlarged solution space makes the design problem more challenging. The ability to synthesize process flows makes the automatic design method more complete and more valuable.
The algorithm is further extended to achieve robust design. Since fabrication variations and modeling inaccuracy always exist, solutions synthesized without considering these variations may not generate satisfactory results in actual fabrication. Robust design methods are developed to synthesize robust mask-layouts and process flows in a "noisy" environment. Since the synthesis procedure considers the effect of variations in the fabrication procedures, the final synthesized solution is highly robust to these variations and generates satisfactory results under a variety of fabrication conditions. The robust design approaches are implemented and tested for robust mask-layout design under mask misalignment and etch-rate variations. Mask-layouts robust to mask-misalignment noise and etch-rate variations during fabrication can be synthesized. The synthesized mask-layouts generally improve the yield significantly by exhibiting consistent performance under a variety of fabrication conditions.
https://thesis.library.caltech.edu/id/eprint/3279

Efficient Automatic Engineering Design Synthesis Via Evolutionary Exploration
https://resolver.caltech.edu/CaltechTHESIS:05032011-082523919
Authors: Cin-Young Lee (Lee-Cin-Young)
Year: 2002
DOI: 10.7907/5HRP-ND58
The evolution of designs in nature has been the inspiration for this thesis, which seeks to develop a framework for efficient automatic engineering design synthesis based on evolutionary methods.
The design synthesis process is equated to an evolutionary process. Because of this, the same formalization for evolution, the evolution algorithm, is used as a design synthesis formalism. Implementation of the evolution algorithm on a computer allows evolution of non-biological systems and, hence, automatic engineering design synthesis. The early and canonical versions of such evolutionary computation are bare-bones evolution tools that neglect several key aspects of evolutionary systems. Some universal aspects of good designs are identified, three of which are dealt with in this thesis. These are variable complexity, modularity, and speciation.
Framed in an evolutionary context, each of these characteristics is a requisite for being able to evolve in correspondence with a dynamic environment. Those that are most evolvable will survive. After all, if a species cannot evolve quickly enough to keep pace with changes in its environment, it will perish. In a design context, this indicates that these characteristics are vital for efficiency and shorter design cycles.
An integrated framework is developed to address all three aspects individually or in any combination, which has not been done heretofore. Because of the poor theoretical foundations of evolutionary computation, the effectiveness of the developed approach is determined through computer experimentation on several test and design problems. Results are promising, as all three aspects were successfully achieved in comparison to canonical evolutionary computation.
https://thesis.library.caltech.edu/id/eprint/6369

Set Mapping in the Method of Imprecision
https://resolver.caltech.edu/CaltechETD:etd-10032002-214953
Authors: Xiaoou Wang (Wang-Xiaoou)
Year: 2003
DOI: 10.7907/J0JH-M190
<p>The Method of Imprecision, or MoI, is a semi-automated, set-based approach that uses the mathematics of fuzzy sets to aid the designer in making decisions with imprecise information in the preliminary design stage.</p>
<p>The Method of Imprecision uses preference to represent the imprecision in engineering design. The preferences are specified both in the design variable space (DVS) and the performance variable space (PVS). To reach the overall preference that is needed to evaluate designs, the mapping between the DVS and the PVS must be explored. Many engineering design tools can only produce precise results from precise specifications, and usually at high cost. In the preliminary stage, the specifications are imprecise and resources are limited. Hence, it is neither cost-effective nor necessary to use these engineering design tools directly to study the mapping between the DVS and the PVS. An interpolation model is introduced to the MoI to construct metamodels of the actual mapping function between the DVS and the PVS. Due to the nature of engineering design, multistage metamodels are needed. Experimental design is used to choose design points for the first metamodel. In order to find an efficient way to choose design points when a priori information is available, many sampling criteria are discussed and tested on two specific examples. The differences between sampling criteria are small when the number of added design points is small, while more design points do improve the accuracy of the metamodel substantially.</p>
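The metamodel idea can be sketched with a simple radial-basis interpolator (illustrative, not the thesis's interpolation model; the analysis function and sample points below are invented):

```python
# A minimal sketch (illustrative, not the thesis code): a radial-basis-function
# metamodel of an expensive design-to-performance mapping, built from a
# handful of sampled design points.
import math

def expensive_analysis(x):
    """Stand-in for a costly engineering analysis tool."""
    return math.sin(x) + 0.1 * x

samples = [0.0, 1.0, 2.0, 3.0, 4.0]                # chosen design points
values = [expensive_analysis(x) for x in samples]  # the only costly calls

def rbf_metamodel(x, eps=1.0):
    """Normalized Gaussian-RBF (Shepard-style) approximation."""
    weights = [math.exp(-eps * (x - s) ** 2) for s in samples]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

# Cheap surrogate evaluations between the sampled design points:
approx = rbf_metamodel(1.5)
exact = expensive_analysis(1.5)
```

Once built, the metamodel is evaluated thousands of times at negligible cost, which is what makes set-based exploration of the DVS-to-PVS mapping affordable.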
<p>The metamodels can be used to induce preferences in the DVS or the PVS according to the extension principle. The Level Interval Algorithm (LIA) is a discrete, approximate implementation of the extension principle. The resulting preference from the LIA is presented as an alpha-cut, which is the set of designs or performances with at least a certain level of preference. There are some limitations of the LIA, especially for multidimensional DVS and PVS. A new extension of the LIA is proposed to compute alpha-cuts with more accuracy and fewer limitations. Designers have more control over the trade-off between the cost and accuracy of the computation with the new extension of the LIA.</p>
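The extension-principle computation that the LIA approximates can be sketched discretely (a one-dimensional toy; the triangular preference and mapping below are invented for illustration):

```python
# A minimal sketch (toy one-dimensional case, not the LIA itself): the
# alpha-cut of the performance preference is the image of the DVS alpha-cut
# under the mapping, here approximated by discrete sampling.

def dvs_alpha_cut(alpha):
    """Alpha-cut of a triangular preference on the design variable [1, 3, 5]."""
    return (1 + 2 * alpha, 5 - 2 * alpha)

def pvs_alpha_cut(f, alpha, n=101):
    """Image of the DVS alpha-cut under mapping f, by discrete sampling."""
    lo, hi = dvs_alpha_cut(alpha)
    ys = [f(lo + (hi - lo) * i / (n - 1)) for i in range(n)]
    return (min(ys), max(ys))

# A non-monotone performance mapping: the PVS cut is not just f(endpoints),
# which is one reason interval-endpoint methods need care.
cut = pvs_alpha_cut(lambda x: (x - 3) ** 2, 0.5)
```

The non-monotone example shows why evaluating only the interval endpoints (as simple interval propagation would) can miss the true extremes of the induced preference.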
<p>The results of the Method of Imprecision should be the set of alternative designs in the DVS at a certain preference level, and the set of achievable performances in the PVS. Information about preferences needs to be transferred back and forth between the DVS and the PVS. The mapping from the PVS to the DVS is usually unavailable, yet it is needed to induce preference in the DVS from the PVS. A new method is constructed to compute the alpha-cuts in both spaces from preferences specified in the DVS and the PVS.</p>
<p>Finally, a new measure is proposed to find the most cost-effective sampling region of new design points for a metamodel. The full implementation of the Method of Imprecision is also listed in detail. It is then applied to an example of the structural design of a passenger vehicle, and comparisons are made between the new results and previous results.</p>
https://thesis.library.caltech.edu/id/eprint/3884

Engineering Design Synthesis of Sensor and Control Systems for Intelligent Vehicles
https://resolver.caltech.edu/CaltechETD:etd-05252006-221412
Authors: Yizhen Zhang (Zhang-Yizhen)
Year: 2006
DOI: 10.7907/NB6H-S822
<p>This thesis investigates the application of formal engineering design synthesis methodologies to the development of sensor and control systems for intelligent vehicles.</p>
<p>A formal engineering design synthesis methodology based on evolutionary computation is presented, with special emphasis on dealing with modern engineering design challenges, such as high or variable complexity of design solutions, multiple conflicting design objectives, and noisy evaluation results. The efficacy of the evolutionary design synthesis method is validated through multiple case studies, in which a variety of novel design solutions are generated to represent different engineering design trade-offs, achieving performance comparable to, if not better than, that of hand-coded solutions in the same simplified environment. More importantly, this automatic design synthesis method shows great potential to handle more complex design problems, where a good hand-coded solution may be very difficult or even impossible to obtain. Moreover, the evolutionary design synthesis methodology appears promising for dealing efficiently with uncertainty in the problem and for adapting well to the collective nature of the task.</p>
<p>In addition, multiple levels of vehicle simulation models, with different computational cost and fidelity, as well as the necessary driver behaviors, are implemented for the different types of simulation experiments conducted for different research purposes. Efforts are made to generate good candidate solutions efficiently, with less computational time and human engineering effort.</p>
<p>Furthermore, a new threat assessment measure, time-to-last-second-braking (<i>T<sub>lsb</sub></i>), is proposed, which directly characterizes a human driver's natural judgment of the urgency and severity of threats in terms of time. Based on driver reaction time experimental results, new warning and overriding criteria are proposed in terms of the new <i>T<sub>lsb</sub></i> measure, and their performance is analyzed statistically for two typical sample pre-crash traffic scenarios. Less affected by driver behavior variability, the new criteria characterize the current dynamic situation better than the previous ones, providing more appropriate warning and more effective overriding at the last moment. Finally, the possibility of frontal collision avoidance through steering (lane-changing) is discussed, and similarly the time-to-last-second-steering (<i>T<sub>lss</sub></i>) measure is proposed and compared with <i>T<sub>lsb</sub></i>.</p>
https://thesis.library.caltech.edu/id/eprint/2062

Information-Theoretic Methods for Modularity in Engineering Design
https://resolver.caltech.edu/CaltechETD:etd-05282007-183612
Authors: Bingwen Wang (Wang-Bingwen)
Year: 2007
DOI: 10.7907/ZBBK-JM82
<p>Due to their many advantages, modular structures commonly exist in artificial and natural systems, and the concept of modular product design has recently received extensive attention from the engineering research community. Although some work has been done on modularity, most of it is qualitative and exploratory in nature, and little is quantitative. One reason for this gap is the lack of a clear definition of modularity. This thesis begins with a detailed discussion on the concepts of “modularity” and “module.”</p>
<p>Based on the background presented here, a mutual-information-based method is proposed to quantify modularity. The method is based on the view that coupling is information flow rather than real physical interaction. Information flow can be quantified by mutual information, which rests on randomness (or uncertainty). Since most engineering products can be modeled as stochastic systems and therefore have randomness, the mutual-information-based method applies in very general cases, and it is shown that the common linkage-counting modularity measure is a special case of the mutual-information-based modularity measure.</p>
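The core quantity can be sketched directly (an illustrative estimator, not the thesis's formulation): mutual information between two component state variables, estimated from joint observations.

```python
# A minimal sketch (illustrative, not the thesis formulation): estimating the
# mutual information between two component state variables from joint
# observations. Near-zero MI suggests the components can sit in separate
# modules; high MI indicates coupling via information flow.
import math
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) in bits from a list of (x, y) observations."""
    n = len(pairs)
    pxy = Counter(pairs)                    # joint empirical distribution
    px = Counter(x for x, _ in pairs)       # marginal of X
    py = Counter(y for _, y in pairs)       # marginal of Y
    mi = 0.0
    for (x, y), c in pxy.items():
        p = c / n
        mi += p * math.log2(p * n * n / (px[x] * py[y]))
    return mi

coupled = [(0, 0), (0, 0), (1, 1), (1, 1)]      # y copies x: 1 bit shared
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]  # no information flow
print(mutual_information(coupled), mutual_information(independent))  # → 1.0 0.0
```

A zero result for the independent pair is what licenses cutting the product between those two components when forming modules.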
<p>The mutual-information-based method is applicable to final design products. But at the early stage of the engineering design process, there are generally only function diagrams. To exploit the benefits of modularity as early as possible, a minimal-description-length-based modularity measure is proposed to determine the modularity of graph structures, which can represent function diagrams. The measure is used as a criterion to hierarchically decompose abstract graph structures and the real function structure of an HP printer by evolutionary computation. Due to the particular genome representations used in evolutionary computation, new genetic operators are developed to determine optimal hierarchical decompositions.</p>
<p>This quantitative modularity measure has been developed to synthesize modular engineering products, especially by evolutionary design. There are many factors affecting the evolution of modular structures, such as genome representation, fitness function, learning, and task structure. The thesis preliminarily studies the effects of the modularity of tasks on the modularity of products in evolutionary computation. Using feed-forward neural networks as examples, the results show that the effects are task-dependent and rely on the amount of resources available for the tasks.</p>
https://thesis.library.caltech.edu/id/eprint/2202

Automated Design Synthesis of Structures using Growth Enhanced Evolution
https://resolver.caltech.edu/CaltechETD:etd-06102008-153834
Authors: Fabien Nicaise (Nicaise-Fabien, fabien.nicaise@alum.rpi.edu)
Year: 2008
DOI: 10.7907/28H7-E831
<p>Engineering design is a complex problem of generating and evaluating a variety of options. Traditional methods typically involve evaluating up to a dozen different point designs. The limiting factor is the amount of time needed to generate, refine, and evaluate the various concepts. Using a computer helps to speed up the process, but human involvement still remains the weakest link.</p>
<p>The natural extension of this process is to continually and rapidly generate, refine, and evaluate concepts entirely automatically. Evolutionary Algorithms provide such a method by emulating natural evolution. The computer maintains a population of point designs, each represented by a gene string that is allowed to change (mutate) and combine with other genes (crossover). At each generation, every individual is modified and then evaluated, and the improved solutions proceed to the next generation.</p>
<p>This thesis will extend the biological model by introducing a growth process for each individual. This is akin to a multi-cellular organism developing in the womb. An encoding for discrete truss structures is described that provides for such an extension. The truss grows from a few basic elements. After several examples demonstrating the growth process, the method is applied to a couple of simple examples using evolutionary algorithms.</p>
https://thesis.library.caltech.edu/id/eprint/5231

Computational Evolutionary Embryogeny
https://resolver.caltech.edu/CaltechETD:etd-01162009-072031
Authors: Or Yogev (Yogev-Or)
Year: 2009
DOI: 10.7907/N4XG-F402
<p>Evolution and development (Evo-Devo) are the two main processes that produce all of the different kinds of phenotypes we see in nature. The evolutionary process is responsible for eliminating the genetic information of weak phenotypes through natural selection, and for exploring novel genotypes through the genetic operations of crossover and mutation. The developmental process uses the set of rules (codons) written in a genome to turn a single cell (zygote) into a mature phenotype. In this thesis, evolutionary and developmental processes are used to evolve the configurations of three-dimensional structures in silico to achieve desired performances. Although natural systems utilize the combination of both evolution and development to produce remarkable performance and diversity, this approach has not yet been applied extensively to the design of continuous three-dimensional load-supporting structures. Beginning with a single artificial cell containing information analogous to a DNA sequence, a structure is grown according to the rules encoded in the sequence. Each artificial cell in the structure contains the same sequence of growth and development rules, and each artificial cell is an element in a finite element mesh representing the structure of the mature individual. Rule sequences are evolved over many generations through selection and survival of individuals in a population.</p>
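The develop-from-a-zygote idea can be sketched in miniature (a toy 2-D growth model invented for illustration; the thesis grows finite-element structures with far richer, context-dependent rules):

```python
# A minimal sketch (toy model, not the thesis embryogeny): growing a structure
# from a single cell by repeatedly applying a rule sequence encoded in an
# artificial genome. Every cell carries the same genome; here expression
# depends on position only through the growth front.

GENOME = ["right", "right", "up", "right", "up"]   # illustrative rule string
MOVES = {"right": (1, 0), "up": (0, 1), "left": (-1, 0), "down": (0, -1)}

def grow(genome, steps):
    cells = {(0, 0)}                  # the zygote
    front = (0, 0)
    for i in range(steps):
        dx, dy = MOVES[genome[i % len(genome)]]   # read the next codon
        front = (front[0] + dx, front[1] + dy)
        cells.add(front)              # a new cell differentiates at the front
    return cells

structure = grow(GENOME, 8)
```

Evolution then acts on the genome (the rule string), not on the grown structure directly, which is the indirect-encoding property that makes developmental approaches scale.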
<p>Modularity and symmetry are visible in nearly every natural and engineered structure. Understanding of the evolution and expression of symmetry and modularity is emerging from recent biological research. Initial evidence of these attributes is present in the phenotypes that are developed from the artificial evolution, although neither characteristic is imposed nor selected for directly.</p>
<p>The computational evolutionary development approach presented here shows promise for synthesizing novel configurations of high-performance systems. The approach may advance system design to a new paradigm, where current design strategies have difficulty producing useful solutions. In addition to a new design approach per se, this model gives us the ability to explore the development process from the standpoint of complex systems analysis. The phenotypes in our system have been grown in a highly stochastic environment, which serves as a triggering mechanism for gene expression. Still, evolution was able to find solutions that are robust to these stochastic elements, both at the phenotype level (the phenotype's ability to function in the environment) and in the growth process itself. In addition, we have explored the effects of symmetric and nonsymmetric environments on the topology of the phenotypes; we have found strong evidence of a high correlation between the two. Finally, we have established a tool that enables us to understand the relationship between the environment and the degree of modularity of the phenotype.</p>
https://thesis.library.caltech.edu/id/eprint/208

Neuro-Evolution Using Recombinational Algorithms and Embryogenesis for Robotic Control
https://resolver.caltech.edu/CaltechTHESIS:06092010-140839602
Authors: Anthony Mathew Roy (Roy-Anthony-Mathew)
Year: 2010
DOI: 10.7907/YNED-VN66
Control tasks involving dramatic nonlinearities, such as decision making, can be challenging for classical design methods. However, autonomous, stochastic design methods such as evolutionary computation have proved effective. In particular, genetic algorithms that create designs via the application of recombinational rules are robust and highly scalable. Neuro-Evolution Using Recombinational Algorithms and Embryogenesis (NEURAE) is a genetic algorithm that creates C++ programs that in turn create neural networks which can function as logic gates. The neural networks created are scalable and robust enough to feature redundancies that allow the network to function despite internal failures. An analysis of NEURAE evinces how biologically inspired phenomena apply to simulated evolution. This allows for an optimization of NEURAE that enables it to create controllers for a simulated swarm of Khepera-inspired robots.
https://thesis.library.caltech.edu/id/eprint/5944
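The kind of logic-gate behavior the evolved networks are selected for can be shown with a tiny hand-built network (weights hand-picked for illustration, not produced by NEURAE):

```python
# A minimal sketch (hand-picked weights, not evolved by NEURAE): a two-layer
# threshold neural network functioning as an XOR gate, the kind of logic
# behavior the evolved networks are selected to exhibit.

def neuron(inputs, weights, bias):
    """McCulloch-Pitts threshold unit."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

def xor_net(a, b):
    h1 = neuron((a, b), (1, 1), -0.5)       # fires if a OR b
    h2 = neuron((a, b), (1, 1), -1.5)       # fires if a AND b
    return neuron((h1, h2), (1, -2), -0.5)  # OR but not AND = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))
```

XOR is the classic case that a single threshold unit cannot compute, so a network that realizes it demonstrates the evolved topology is doing genuinely nonlinear work.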