(PHD, 2017)

Abstract:

This thesis introduces a new way of looking at incompressible fluid dynamics. Specifically, we formulate and simulate classical fluids using a Schrödinger equation subject to an incompressibility constraint. We call such a fluid flow an incompressible Schrödinger flow (ISF). The approach is motivated by Madelung’s hydrodynamical form of quantum mechanics, and we show that it can simulate classical fluids with particular advantages in its simplicity and its ability to capture thin vortex dynamics. The effective dynamics under an ISF is shown to be an Euler equation modified with a Landau-Lifshitz term. We show that the modifying term not only enhances the dynamics of vortex filaments, but also regularizes the potentially singular behavior of incompressible flows.
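A minimal sketch of what one time step of such a scheme can look like on a 2D periodic grid, using a single-component wavefunction and spectral derivatives (an illustration only; the thesis's actual algorithm, including its two-component wavefunction and 3D setting, differs): evolve ψ by the free Schrödinger equation, then restore incompressibility by a pressure-projection phase shift.

```python
import numpy as np

def _wavenumbers(n):
    # Integer wavenumbers for a square periodic domain [0, 2*pi)^2.
    k = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)
    return np.meshgrid(k, k, indexing="ij")

def schrodinger_step(psi, dt, hbar=1.0):
    # Free Schrodinger evolution i*hbar*dpsi/dt = -(hbar^2/2)*Lap(psi),
    # integrated exactly in Fourier space (a unitary, norm-preserving step).
    kx, ky = _wavenumbers(psi.shape[0])
    phase = np.exp(-1j * hbar * (kx**2 + ky**2) * dt / 2)
    return np.fft.ifft2(phase * np.fft.fft2(psi))

def momentum_divergence(psi, hbar=1.0):
    # Divergence of the Madelung momentum field hbar * Im(conj(psi) * grad(psi)).
    kx, ky = _wavenumbers(psi.shape[0])
    fpsi = np.fft.fft2(psi)
    vx = hbar * np.imag(np.conj(psi) * np.fft.ifft2(1j * kx * fpsi))
    vy = hbar * np.imag(np.conj(psi) * np.fft.ifft2(1j * ky * fpsi))
    return np.real(np.fft.ifft2(1j * kx * np.fft.fft2(vx) + 1j * ky * np.fft.fft2(vy)))

def pressure_project(psi, hbar=1.0):
    # Enforce incompressibility: normalize |psi| = 1 pointwise, solve a Poisson
    # problem Lap(q) = div(v) for a phase q, and shift the phase by -q/hbar.
    psi = psi / np.abs(psi)
    kx, ky = _wavenumbers(psi.shape[0])
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                       # avoid 0/0; the mean mode is set below
    q_hat = -np.fft.fft2(momentum_divergence(psi, hbar)) / k2
    q_hat[0, 0] = 0.0
    q = np.real(np.fft.ifft2(q_hat))
    return psi * np.exp(-1j * q / hbar)
```

With an irrotational test field ψ = exp(i(sin x + cos y)), the Schrödinger step preserves the L2 norm of ψ and the projection drives the divergence of the momentum field to near machine precision.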

Another contribution of this thesis is the elucidation of a general, geometric notion of Clebsch variables. A geometric Clebsch variable is useful for analyzing the dynamics of an ISF, as well as for representing vortical structures in a general flow field. We also develop an algorithm for approximating a “spherical” Clebsch map for an arbitrary given flow field, which leads to a new tool for visualizing, analyzing, and processing the vortex structure in fluid data.


(PHD, 2014)

Abstract: We develop new algorithms which combine the rigorous theory of mathematical elasticity with the geometric underpinnings and computational attractiveness of modern tools in geometry processing. We develop a simple elastic energy based on the Biot strain measure, which improves on state-of-the-art methods in geometry processing. We use this energy within a constrained optimization problem to, for the first time, provide surface parameterization tools which guarantee injectivity and bounded distortion, are user-directable, and which scale to large meshes. With the help of some new generalizations in the computation of matrix functions and their derivatives, we extend our methods to a large class of hyperelastic stored energy functions quadratic in piecewise analytic strain measures, including the Hencky (logarithmic) strain, opening up a wide range of possibilities for robust and efficient nonlinear elastic simulation and geometry processing by elastic analogy.


(PHD, 2013)

Abstract:

This thesis introduces fundamental equations and numerical methods for manipulating surfaces in three dimensions via conformal transformations. Conformal transformations are valuable in applications because they naturally preserve the integrity of geometric data. To date, however, there has been no clearly stated and consistent theory of conformal transformations that can be used to develop general-purpose geometry processing algorithms: previous methods for computing conformal maps have been restricted to the flat two-dimensional plane, or other spaces of constant curvature. In contrast, our formulation can be used to produce—for the first time—general surface deformations that are perfectly conformal in the limit of refinement. It is for this reason that we commandeer the title Conformal Geometry Processing.

The main contribution of this thesis is analysis and discretization of a certain time-independent Dirac equation, which plays a central role in our theory. Given an immersed surface, we wish to construct new immersions that (i) induce a conformally equivalent metric and (ii) exhibit a prescribed change in extrinsic curvature. Curvature determines the potential in the Dirac equation; the solution of this equation determines the geometry of the new surface. We derive the precise conditions under which curvature is allowed to evolve, and develop efficient numerical algorithms for solving the Dirac equation on triangulated surfaces.

From a practical perspective, this theory has a variety of benefits: conformal maps are desirable in geometry processing because they do not exhibit shear, and therefore preserve textures as well as the quality of the mesh itself. Our discretization yields a sparse linear system that is simple to build and can be used to efficiently edit surfaces by manipulating curvature and boundary data, as demonstrated via several mesh processing applications. We also present a formulation of Willmore flow for triangulated surfaces that permits extraordinarily large time steps and apply this algorithm to surface fairing, geometric modeling, and construction of constant mean curvature (CMC) surfaces.


(PHD, 2012)

Abstract: This thesis presents a new method to construct smooth 1- and 2-form subdivision schemes over the 3D volumetric domain. Based on the subdivided 1- and 2-form coefficient field, smooth vector fields can be constructed using Whitney forms. To obtain stencils in the regular setting, classical 0-form subdivision and linear 1- and 2-form subdivision over the octet mesh are introduced. Then, by convolving with a smoothing operator, smooth 1- and 2-form subdivision schemes in the regular case can be determined up to one free parameter. This parameter can be determined by a novel technique based on spectrum and momentum considerations. However, artifacts appear in boundary regions because of the incomplete regular support and the shrinking behavior of the original 0-form subdivision scheme. To address these problems, the projection-scaling method and the expansion method are introduced and compared. The former projects arbitrary discrete differential forms onto a subspace spanned by low-order potential fields. The algorithm subdivides these potential fields and reconstructs the discrete form at the refined level using linear combinations. Scaling is included for elements near the boundary to offset the effect of mesh shrinkage. Alternatively, for the expansion method, a compatible non-shrinking 0-form subdivision scheme is constructed first. Based on this new 0-form subdivision method, extending 1- and 2-forms beyond the boundary becomes natural. In our experiments, no noticeable artifacts, such as attenuation, enlargement, or undesirable bending, are observed in practice.


(MS, 2010)

Abstract:

Connections provide a way to compare local quantities defined at different points of a geometric space. This thesis develops a discrete theory of connections that naturally leads to practical, efficient numerical algorithms for geometry processing. Our formulation is motivated by real-world applications where meshes may be noisy or coarsely discretized. Further, because our discrete framework closely parallels the smooth theory, we can draw upon a huge wealth of existing knowledge to develop and interpret mesh processing algorithms.

The main contribution of this thesis is a new algorithm for computing trivial connections on discrete surfaces that are as smooth as possible everywhere but on a set of isolated singularities of given index. A connection is represented via an angle associated with each dual edge, i.e., a discrete angle-valued 1-form. These angles are determined by the solution to a linear system, and are globally optimal in the sense that they describe the trivial connection closest to Levi-Civita among all solutions with the prescribed set of singularities. Relative to previous methods our algorithm is surprisingly simple, and can be implemented using standard operations from mesh processing and linear algebra. The solution can be used to construct rotationally symmetric direction fields with a prescribed set of singularities and directional constraints, which are essential in applications such as quadrilateral remeshing and texture synthesis.
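The “closest to Levi-Civita” optimality described above can be sketched as the minimum-norm solution of an underdetermined linear system. The constraint matrix and right-hand side below are toy stand-ins (not taken from the thesis): the real system would encode holonomy around basis cycles of the mesh, with the right-hand side built from prescribed singularity indices and angle defects.

```python
import numpy as np

def smoothest_adjustment(A, b):
    # Minimum-norm x with A x = b: among all angle assignments satisfying the
    # constraints, the one closest (in the least-squares sense) to the
    # zero-adjustment, i.e. Levi-Civita, connection.
    return np.linalg.pinv(A) @ b

# Toy 2-constraint, 3-unknown system standing in for the real cycle constraints.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([0.4, -0.2])
x = smoothest_adjustment(A, b)
```

The pseudoinverse solution satisfies the constraints exactly and is shorter than any other feasible solution, which is what “globally optimal” means here.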


(PHD, 2010)

Abstract:

The physics of most mechanical systems can be described from a geometric viewpoint; i.e., by defining variational principles that the system obeys and the properties that are being preserved (often referred to as invariants). The methods that arise from properly discretizing such principles preserve corresponding discrete invariants of the mechanical system, even at very coarse resolutions, yielding robust and efficient algorithms. In this thesis geometric interpretations of physical systems are used to develop algorithms for discretization of both space (including proper material discretization) and time. The effectiveness of these algorithms is demonstrated by their application to the simulation of elastic bodies.

Time discretization is performed using variational time integrators that, unlike many of the standard integrators (e.g., Explicit Euler, Implicit Euler, Runge-Kutta), do not introduce artificial numerical energy decrease (damping) or increase. A new physical damping model that does not depend on timestep size is proposed for finite viscoelasticity simulation. When used in conjunction with variational time integrators, this model yields simulations that physically damp the energy of the system, even when timesteps of different sizes are used. The usual root-finding procedure for time update is replaced with an energy minimization procedure, allowing for more precise step size control inside a non-linear solver. Additionally, a study of variational and time-reversible methods for adapting timestep size during the simulation is presented.
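The contrast with non-variational integrators can be illustrated on a unit harmonic oscillator (a stand-in example, not the thesis's elasticity setting): explicit Euler steadily inflates the energy, while the symplectic (variational) Euler method keeps it bounded for arbitrarily long runs.

```python
def explicit_euler(q, p, dt):
    # Both updates use the old state: not symplectic, energy drifts upward.
    return q + dt * p, p - dt * q

def symplectic_euler(q, p, dt):
    # Variational/symplectic: the position update sees the *new* momentum,
    # so a nearby "shadow" energy is conserved exactly.
    p_new = p - dt * q
    return q + dt * p_new, p_new

def energy(q, p):
    # Unit-mass, unit-stiffness harmonic oscillator.
    return 0.5 * (p * p + q * q)

dt, steps = 0.01, 1000
qe, pe = 1.0, 0.0
qs, ps = 1.0, 0.0
for _ in range(steps):
    qe, pe = explicit_euler(qe, pe, dt)
    qs, ps = symplectic_euler(qs, ps, dt)
```

After 1000 steps the explicit-Euler energy has grown by roughly 10 percent (each step multiplies it by exactly 1 + dt²), while the symplectic trajectory stays within O(dt) of the initial energy 0.5.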

Spatial discretization is performed using a finite element approach for finite (non-linear) or linear elasticity. A new method for the coarsening of elastic properties of heterogeneous linear materials is proposed. The coarsening is accomplished through a precomputational procedure that converts the heterogeneous elastic coefficients of the very fine mesh into anisotropic elastic coefficients of the coarse mesh. This method does not depend on the material structure of objects, allowing for complex and non-uniform material structures. Simulation on the coarse mesh, equipped with the resulting elastic coefficients, can then be performed at interactive rates using existing linear elasticity solvers and, if desired, co-rotational methods. A time-reversible integrator is used to improve time integration of co-rotated linear elasticity.


(PHD, 2008)

Abstract:

Vertex- and face-based subdivision schemes are now routinely used in geometric modeling and computational science, and their primal/dual relationships are well studied. In this thesis we interpret these schemes as defining bases for discrete differential 0- and 2-forms, respectively, and present a novel subdivision-based method for constructing smooth differential forms on simplicial surfaces. It completes the picture of classic primal/dual subdivision by introducing a new concept named r-cochain subdivision. Such subdivision schemes map scalar coefficients on r-simplices from the coarse to the refined mesh and converge to r-forms on the mesh. We perform convergence and smoothness analysis in an arbitrary topology setting by utilizing the techniques of matrix subdivision and the subdivision differential structure.

The other significant feature of our method is that it preserves the exactness of differential forms. We prove that exactness preservation is equivalent to commutation relations between the subdivision schemes and the topological exterior derivative. Our construction is based on treating r- and (r+1)-cochain subdivision schemes as a pair and enforcing these commutation relations. As a result, our low-order construction recovers classic Whitney forms, while the high-order construction yields a new class of high-order Whitney forms. The 1-form bases are C^1, except at irregular vertices where they are C^0. We also demonstrate extensions to three-dimensional subdivision schemes and to non-simplicial meshes such as quadrilaterals and octahedra.
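The topological exterior derivative in these commutation relations is realized on a mesh by signed incidence matrices. A minimal sketch on a single oriented triangle (an illustration, not the thesis's construction) exhibits the identity d∘d = 0 that underlies exactness.

```python
import numpy as np

# Signed incidence matrices for one oriented triangle with vertices 0, 1, 2,
# edges e01, e12, e02, and one face whose boundary is e01 + e12 - e02.
d0 = np.array([[-1, 1, 0],     # e01: tail 0, head 1
               [0, -1, 1],     # e12
               [-1, 0, 1]])    # e02
d1 = np.array([[1, 1, -1]])    # face boundary orientation

# The exterior derivative of any exact 1-form (the "gradient" d0 @ u of
# vertex values u) vanishes, because d1 @ d0 = 0.
u = np.array([3.0, 5.0, -2.0])
curl_of_gradient = d1 @ (d0 @ u)
```

A subdivision scheme for r-cochains preserves exactness precisely when it commutes with these matrices, which is the relation the pair-wise construction above enforces.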

Our construction is seamlessly integrated with surface subdivision. Once a metric is supplied, the scalar 1-form coefficients define a smooth tangent vector field on the underlying subdivision surface. Design of tangent vector fields is made particularly easy with this machinery, as we demonstrate. The subdivision r-forms can also be used as finite element bases for physical simulations on curved surfaces. We demonstrate the optimal rate of convergence in solving the Laplace and bi-Laplace equations of 1-forms.


(MS, 2006)

Abstract: THIS RECORD IS NO LONGER ACTIVE – GO TO RELATED URL FOR FILES


(MS, 2006)

Abstract: Circle patterns are a novel method for the construction of discrete conformal mappings from surface meshes of arbitrary topology to the plane. This approach is based on representing the mesh as an arrangement of circles – one for each face – with prescribed intersection angles. Given these angles, the circle radii follow as the unique minimizer of a convex energy. The method supports very flexible boundary conditions, ranging from free boundaries to control of the boundary shape via prescribed curvatures. Closed meshes of genus zero can be parameterized over the sphere. To parameterize higher-genus meshes we introduce cone singularities at designated vertices. The parameter domain is then a piecewise Euclidean surface. Cone singularities can also help to reduce the often very large area distortion of global conformal maps to moderate levels. Our method involves two optimization problems: a quadratic program and the unconstrained minimization of the circle pattern energy. The latter is a convex function of logarithmic radius variables with simple explicit expressions for gradient and Hessian. In this thesis we present implementation details and possible extensions to the circle pattern method.
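Because gradient and Hessian are explicit, the unconstrained minimization admits a plain Newton iteration. The sketch below uses a stand-in convex energy in log-variables, E(u) = Σᵢ (e^{uᵢ} − bᵢ uᵢ) with known minimizer u = log b, rather than the actual circle pattern energy, which involves the prescribed intersection angles.

```python
import numpy as np

def newton_minimize(grad, hess, u0, tol=1e-12, max_iter=50):
    # Plain Newton iteration for a smooth convex energy with explicit
    # gradient and Hessian, the setting of the circle pattern energy.
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(max_iter):
        g = grad(u)
        if np.linalg.norm(g) < tol:
            break
        u = u - np.linalg.solve(hess(u), g)
    return u

# Stand-in convex energy in log-radius variables u = log(r):
# E(u) = sum_i (exp(u_i) - b_i * u_i), minimized at u_i = log(b_i).
b = np.array([1.0, 2.0, 3.0])
grad = lambda u: np.exp(u) - b
hess = lambda u: np.diag(np.exp(u))
u_star = newton_minimize(grad, hess, np.zeros(3))
```

Working in logarithmic variables is what makes the energy convex and the Newton step well-defined without positivity constraints on the radii.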


(PHD, 2005)

Abstract: This thesis introduces a novel geometry processing pipeline based on unconstrained spherical parameterization and normal remeshing. We claim three main contributions. First, we show how to increase the stability of normal mesh construction while speeding it up, by decomposing the process into two stages: parameterization and remeshing. We show that the remeshing step can be seen as resampling under a small perturbation of the given parameterization. Based on this observation we describe a novel algorithm for efficient and stable (interpolatory) normal mesh construction via parameterization perturbation. Our second contribution is the introduction of variational normal meshes. We describe a novel algorithm for encoding these meshes, and use our implementation to argue that variational normal meshes have a higher approximation quality than interpolating normal meshes, as expected. In particular, we demonstrate that interpolating normal meshes have about 60 percent higher Hausdorff approximation error than our novel variational normal meshes for the same number of vertices. We also show that variational normal meshes have fewer aliasing artifacts than interpolatory normal meshes. The third contribution concerns creating parameterizations for unstructured genus-zero meshes. Previous approaches could only avoid collapses by introducing artificial constraints or continuous reprojections; our method avoids both. The key idea is to define upper-bound energies that are still good approximations. We achieve this by dividing classical planar triangle energies by the minimum distance to the sphere center. We prove that this simple modification provides the desired upper bounds and is a good approximation in the finite element sense. We have implemented all algorithms and provide example results and statistical data supporting our theoretical observations.


(MS, 2005)

Abstract:

Visual quality, low computational cost, and numerical stability are foremost goals in computer animation. An important ingredient in achieving these goals is the conservation of fundamental motion invariants. For example, rigid and deformable body simulation has benefited greatly from conservation of linear and angular momenta. In the case of fluids, however, none of the current techniques focuses on conserving invariants, and consequently they often introduce a visually disturbing numerical diffusion of vorticity. Visually just as important is the resolution of complex simulation domains. Doing so with regular (even if adaptive) grid techniques can be computationally delicate.

In this thesis we describe a novel technique for the simulation of fluid flows. It is designed to respect the defining differential properties, i.e., the conservation of circulation along arbitrary loops as they are transported by the flow. Consequently, our method offers several new and desirable properties: (1) arbitrary simplicial meshes (triangles in 2D, tetrahedra in 3D) can be used to define the fluid domain; (2) the computations are efficient due to discrete operators with small support; (3) the method is stable for arbitrarily large time steps; (4) it preserves discrete circulation avoiding numerical diffusion of vorticity; and (5) its implementation is straightforward.


(PHD, 2005)

Abstract:

A surface parameterization is a function that maps coordinates in a 2-dimensional parameter space to points on a surface. This thesis investigates two kinds of parameterizations for surfaces that are disc-like in shape. The first is a map from a region of the plane to the surface. The second is a mapping from one surface to another, which defines a correspondence between them. The main challenge in both cases is the construction of a smooth map with low distortion. In this thesis we present a variational approach to surface parameterization that addresses these challenges.

The first contribution of this thesis is the development of a variational framework for parameterizations. This framework encompasses the mapping of a region of the plane to a surface that is isomorphic to a disc, and the mapping between such surfaces. It is based on the rich mathematical theory built up over decades in the study of rational mechanics. Because of its roots in mechanics, our parameterizations are guaranteed to be smooth and locally bijective, and optimal parameterizations which minimize a variational energy are known to exist. A proof of existence is given for the case of optimal parameterizations in the plane.

Our second contribution is a set of algorithms to construct parameterizations for surface triangulations. We describe in detail free-boundary methods that use standard numerical optimization algorithms for the computation of optimal parameterizations. A flexible set of parameters is offered to the user to formulate preferences for the trade-off between angle, area and length distortion in parameterizations in the plane. In the specification of a correspondence between surfaces, we provide user control through feature lines which are mapped as sets onto corresponding feature lines. Additionally we allow for a partial correspondence of the surfaces which is particularly important for correlating surfaces with boundaries.

Our third contribution is an analysis of the performance of the algorithms based on our implementations. Our testing focuses on parameterizations of physically-acquired triangle mesh data. The efficiency of our methods is measured by analyzing the rate of convergence of the energy minimization, and execution times are shown to be quite reasonable. Robustness is established in the presence of large deformations in the parameterizations and the stability of our parameterizations is demonstrated under different discretizations of the surfaces.

Our fourth contribution is a set of concrete, compelling applications of surface parameterization. Non-trivial examples which draw from texture mapping, morphing and facial animation provide further evidence and insight into the versatility of our parameterization framework.


(PHD, 2004)

Abstract:

It is becoming increasingly easy to bring the body directly to digital form via stereoscopic immersive displays and tracked input devices. Is this space a viable one in which to construct 3d objects? Interfaces built upon two-dimensional displays and 2d input devices are the current standard for spatial construction, yet 3d interfaces, where the dimensionality of the interactive space matches that of the design space, have something unique to offer.

This work increases the richness of 3d interfaces by bringing several new tools into the picture: the hand is used directly to trace surfaces; tangible tongs grab, stretch, and rotate shapes; a handle becomes a lightsaber and a tool for dropping simple objects; and a raygun, analogous to the mouse, is used to select distant things. With these tools, a richer 3d interface is constructed in which a variety of objects are created by novice users with relative ease. What we see is a space, not exactly like the traditional 2d computer, but rather one in which a distinct and different set of operations is easy and natural.

Design studies, complemented by user studies, explore the larger space of three-dimensional input possibilities. The target applications are spatial arrangement, freeform shape construction, and molecular design. New possibilities for spatial construction develop alongside particular nuances of input devices and the interactions they support. Task-specific tangible controllers provide a cultural affordance which links input devices to deep histories of tool use, enhancing intuition and affective connection within an interface. On a more practical, but still emotional level, these input devices frame kinesthetic space, resulting in high-bandwidth interactions where large amounts of data can be comfortably and quickly communicated.

A crucial issue with this interface approach is the tension between specific and generic input devices. Generic devices are the tradition in computing – versatile, remappable, frequently bereft of culture or relevance to the task at hand. Specific interfaces are an emerging trend – customized and culturally rich – but to date these systems have been tightly linked to a single application, limiting their widespread use. The theoretical heart of this thesis, and its chief contribution to interface research at large, is an approach to customization. Instead of matching an application domain’s data, each new input device supports a functional class. The spatial construction task is split into four types of manipulation: grabbing, pointing, holding, and rubbing. Each of these action classes spans the space of spatial construction, allowing a single tool to be used in many settings without losing the unique strengths of its specific form. Outside of 3d interfaces, outside of spatial construction, this approach strikes a balance between generic and specific suitable for many interface scenarios.

In practice, these specific function groups are given versatility via a quick remapping technique which allows one physical tool to perform many digital tasks. For example, the handle can be quickly remapped from a lightsaber that cuts shapes to tools that place simple platonic solids, erase portions of objects, and draw double-helices in space.

The contributions of this work lie both in a theoretical model of spatial interaction, and in input devices (combined with new interactions) which illustrate the efficacy of this philosophy. This research brings the new results of Tangible User Interfaces to the field of Virtual Reality. We find a space, in and around the hand, where full-fledged haptics are not necessary for users to connect physically with digital form.


(PHD, 2003)

Abstract:

This thesis presents computational topology algorithms for discrete 2-manifolds. Although it is straightforward to compute the genus of a discrete 2-manifold, this topological invariant does not tell us enough for most computer graphics applications where we would like to know: what does the topology look like? Genus is a scalar value with no associated geometric appearance. We can, however, isolate geometric regions of the surface that are topologically interesting. The simplest topologically interesting, and perhaps most intuitive, regions to consider are those with genus equal to one. By isolating and examining such regions we can compute measures to better describe the appearance of relevant surface topology. Thus, this work focuses on isolating handles, regions with genus equal to one, in discrete 2-manifolds.
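The "straightforward" genus computation mentioned above reduces to counting: the Euler characteristic χ = V − E + F determines the genus of a closed surface. A minimal sketch, valid under the assumption that the mesh is a closed, connected, orientable 2-manifold:

```python
def genus(num_vertices, num_edges, num_faces):
    # Euler characteristic chi = V - E + F; for a closed, connected,
    # orientable 2-manifold, chi = 2 - 2g, so g = (2 - chi) / 2.
    chi = num_vertices - num_edges + num_faces
    assert chi % 2 == 0, "counts do not describe a closed orientable surface"
    return (2 - chi) // 2
```

For example, a tetrahedron (V=4, E=6, F=4) has genus 0, and the standard 9-vertex torus triangulation (V=9, E=27, F=18) has genus 1 – but, as the paragraph above notes, the scalar alone says nothing about where on the surface that handle sits.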

In this thesis, we present novel algorithms guaranteed to identify and isolate handles for various discrete surface representations. Additionally, we present robust techniques to measure the geometric extent of handles by identifying two locally minimal-length non-separating cycles for each handle. We also present algorithms to retain or simplify the topology of a reconstructed surface as desired. Finally, the value of these algorithms is demonstrated through specific applications to computer graphics. For example, we demonstrate how geometric models can be greatly improved through topology simplification both for models represented by volume data or by triangle meshes.

The contributions of this work include:

- A robust and efficient method for identifying and isolating handles for discrete 2-manifolds.
- A method to robustly represent the topology of the surface with an augmented Reeb graph.
- A robust method to find two locally minimal-length non-separating cycles for each handle.
- A simple method to simplify the topology for volume data and triangle meshes which preserves the local geometry as much as possible.
- An out-of-core method for topology simplification for volume data.


(PHD, 2003)

Abstract:

Finite element solvers are critical in computer graphics, engineering, medical and biological application areas. For large problems, the use of adaptive refinement methods can tame otherwise intractable computational costs. Current formulations of adaptive finite element mesh refinement seem straightforward, but their implementations prove to be a formidable task. We offer an alternative point of departure which yields equivalent adapted approximation spaces wherever the traditional mesh refinement is applicable, but proves to be significantly simpler to implement. At the same time it is much more powerful in that it is general (no special tricks are required for different types of finite elements), and applicable to novel discretizations where traditional mesh refinement concepts are not of much help, for instance on subdivision surfaces.

For classical finite-elements, adaptive refinement is typically carried out by splitting mesh elements in isolation. While so-called mesh refinement is well-understood, it is considered cumbersome to implement for unstructured three-dimensional meshes, among other settings, in particular because mesh compatibility must be explicitly maintained. Furthermore, element-splitting does not apply to problems that benefit from higher-order B-spline discretizations and their more general counterparts, so-called subdivision elements. We introduce a simple, general method for adaptive refinement which applies uniformly in all these settings and others. The basic principle of our approach is to refine basis functions, not elements. Our method is naturally compatible: unlike mesh refinement, basis refinement never creates incompatible meshes. Our contributions are (a) a minimal mathematical framework, with (b) associated algorithms for basis refinement; furthermore, we (c) describe the mapping of popular methods (finite-elements, wavelets, splines and subdivision) onto this framework, and (d) demonstrate working implementations of basis refinement with applications in graphics, engineering, and medicine.

Our approach is based on compactly supported refinable functions. We refine by augmenting the basis with narrowly-supported functions, not by splitting mesh elements in isolation. This removes a number of implementation headaches associated with element-splitting and is a general technique independent of domain dimension, element type (e.g., triangle, quad, tetrahedron, hexahedron), and basis function order (piecewise linear, quadratic, cubic, etc.). The (un-)refinement algorithms are simple and require little in terms of data structure support. Many popular discretizations, including classical finite-elements, wavelets and multi-wavelets, splines and subdivision schemes may be viewed as refinable function spaces, thus they are encompassed by our approach.
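The defining property of a refinable function is a two-scale relation: each coarse basis function is an exact finite combination of its half-scale translates. An illustration of refinability (not code from the thesis) for the piecewise-linear hat function underlying classical linear finite elements:

```python
import numpy as np

def hat(x):
    # Piecewise-linear "hat" basis function supported on [-1, 1].
    return np.maximum(0.0, 1.0 - np.abs(x))

# Two-scale relation: hat(x) = 0.5*hat(2x+1) + hat(2x) + 0.5*hat(2x-1),
# so refining the basis never requires touching elements in isolation.
x = np.linspace(-2.0, 2.0, 401)
coarse = hat(x)
refined = 0.5 * hat(2 * x + 1) + hat(2 * x) + 0.5 * hat(2 * x - 1)
```

Adaptive basis refinement exploits exactly this identity: a coarse function can be replaced (or supplemented) by its fine-scale constituents without changing the spanned space where no refinement is wanted.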

Our first contribution is the specification of a minimal mathematical framework, at its heart a sequence of nested approximation spaces. By construction, the bases for these spaces consist of refinable functions. From an approximation theory point of view this is a rather trivial statement; however it has a number of very important and highly practical consequences. Our adaptive solver framework requires only that the basis functions used be refinable. It makes no assumptions as to (a) the dimension of the domain; (b) the tessellation of the domain, i.e., the domain elements, be they triangles, quadrilaterals, tetrahedra, hexahedra, or more general domains; (c) the approximation smoothness or accuracy; and (d) the support diameter of the basis functions. The specification of the nested spaces structure is sufficiently weak to accommodate many practical settings, while strong enough to satisfy the necessary conditions of our theorems and algorithms.

Our second contribution is to show that basis refinement can be implemented by a small set of simple algorithms. Our method requires efficient data structures and algorithms to (a) keep track of interactions between basis functions (i.e., to find the non-zero entries in the stiffness matrix), and (b) manage a tessellation of the domain suitable for evaluation of the associated integrals (i.e., to evaluate the entries of the stiffness matrix). We provide a specification for these requirements, develop the relevant theorems and proofs, and invoke these theorems to produce concrete, provably-correct pseudo-code. The resulting algorithms, while capturing the full generality (in dimension, tessellation, smoothness, etc.) of our method, are surprisingly simple.

Our third contribution is the mapping of finite-elements, wavelets and multi-wavelets, splines and subdivision schemes onto our nested spaces framework. No single discretization fits all applications. In our survey of classical and recently-popularized discretizations we demonstrate that our unifying algorithms for basis refinement encompass a very broad range of problems.

Our fourth contribution is a set of concrete, compelling examples based on our implementation of basis refinement. Adaptive basis refinement may be profitably applied in solving partial differential equations (PDEs) useful in many application domains, including simulation, animation, modeling, rendering, surgery, biomechanics, and computer vision. Our examples span thin shells (fourth order elliptic PDE using a Loop subdivision discretization), volume deformation and stress analysis using linear elasticity (second order PDE using linear-tetrahedral and trilinear-hexahedral finite elements respectively) and a subproblem of electrocardiography (the generalized Laplace equation using linear tetrahedral finite elements).


(PHD, 1998)

Abstract:

Stationary subdivision is an important tool for generating smooth free-form surfaces used in CAGD and computer graphics. One of the challenges in the construction of subdivision schemes for arbitrary meshes is to guarantee that the surfaces produced by the algorithm are C^{1}-continuous. First results in this direction were obtained only recently. In this thesis we derive necessary and sufficient criteria for C^{k}-continuity that generalize and extend most known conditions.

We present a new method for the analysis of smoothness of subdivision, which allows us to analyze schemes that do not generate surfaces admitting a closed-form parameterization on regular meshes, such as the Butterfly scheme and schemes with modified rules for tagged edges.

The theoretical basis for analysis of subdivision that we develop allows us to suggest methods for constructing new subdivision schemes with improved behavior. We present a new interpolating subdivision scheme based on the Butterfly scheme, which generates C^{1}-continuous surfaces from arbitrary meshes.

We describe a multiresolution representation for meshes based on subdivision. Combining subdivision and the smoothing algorithms of Taubin [61] allows us to construct a set of algorithms for interactive multiresolution editing of complex hierarchical meshes of arbitrary topology.
