Monograph records
https://feeds.library.caltech.edu/people/Taylor-S/monograph.rss
A Caltech Library Repository Feed
Fri, 08 Dec 2023 12:48:05 +0000

The Program Composition Project
https://resolver.caltech.edu/CaltechCSTR:1990.cs-tr-90-03
Authors: Chandy, K. Mani; Taylor, Stephen; Kesselman, Carl; Foster, Ian
Year: 1990
No abstract available.
https://authors.library.caltech.edu/records/wemek-2ns55

A Primer for Program Composition Notation
https://resolver.caltech.edu/CaltechCSTR:1990.cs-tr-90-10
Authors: Chandy, K. Mani; Taylor, Stephen
Year: 1990
This primer describes a notation for program composition. Program composition is putting programs together to get larger ones. PCN (Program Composition Notation) is a programming language that allows programmers to compose programs so that composed programs execute efficiently on uniprocessors, distributed-memory multicomputers or shared-memory multiprocessors. (Revised December 12, 1990)
https://authors.library.caltech.edu/records/0779e-dms14

A compiler approach to scalable concurrent program design
https://resolver.caltech.edu/CaltechCSTR:1992.cs-tr-92-07
Authors: Foster, Ian; Taylor, Stephen
Year: 1992
The programmer's most powerful tool for controlling complexity in program design is abstraction. We seek to use abstraction in the design of concurrent programs, so as to separate design decisions concerned with decomposition, communication, synchronization, mapping, granularity, and load balancing. This paper describes programming and compiler techniques intended to facilitate this design strategy. The programming techniques are based on a core programming notation with two important properties: the ability to separate concurrent programming concerns, and extensibility with reusable programmer-defined abstractions. The compiler techniques are based on a simple transformation system together with a set of compilation transformations and portable run-time support. The transformation system allows programmer-defined abstractions to be defined as source-to-source transformations that convert abstractions into the core notation. The same transformation system is used to apply compilation transformations that incrementally transform the core notation toward an abstract concurrent machine. This machine can be implemented on a variety of concurrent architectures using simple run-time support.

The transformation, compilation, and run-time system techniques have been implemented and are incorporated in a public-domain program development toolkit. This toolkit operates on a wide variety of networked workstations, multicomputers, and shared-memory multiprocessors. It includes a program transformer, concurrent compiler, syntax checker, debugger, performance analyzer, and execution animator. A variety of substantial applications have been developed using the toolkit, in areas such as climate modeling and fluid dynamics.
https://authors.library.caltech.edu/records/zexrd-y1267

A Parabolic Theory of Load Balance
https://resolver.caltech.edu/CaltechCSTR:1993.cs-tr-93-25
Authors: Heirich, Alan; Taylor, Stephen
Year: 1993
DOI: 10.7907/Z91R6NJD
We derive analytical results for a dynamic load balancing algorithm modeled by the heat equation u_t = ∇²u. The model is appropriate for quickly diffusing disturbances in a local region of a computational domain without affecting other parts of the domain. The algorithm is useful for problems in computational fluid dynamics which involve moving boundaries and adaptive grids implemented on mesh-connected multicomputers. The algorithm preserves task locality and uses only local communication. Resulting load distributions approximate time-asymptotic solutions of the heat equation. As a consequence it is possible to predict both the rate of convergence and the quality of the final load distribution. These predictions suggest that a typical imbalance on a multicomputer with over a million processors can be reduced by one order of magnitude after 105 arithmetic operations at each processor. For large n the time complexity to reduce the expected imbalance is effectively independent of n.
https://authors.library.caltech.edu/records/j6w33-z1591

Progress Report to the Advanced Research Projects Agency on the Scalable Concurrent Programming Project
https://resolver.caltech.edu/CaltechCSTR:1993.cs-tr-93-35
Authors: Taylor, Stephen
Year: 2001
DOI: 10.7907/Z93J3B0X
[No abstract available]
https://authors.library.caltech.edu/records/h82rp-yh933

A Parabolic Load Balancing Method
https://resolver.caltech.edu/CaltechCSTR:1994.cs-tr-94-13
Authors: Heirich, Alan; Taylor, Stephen
Year: 2001
DOI: 10.7907/Z94T6GCQ
This paper presents a diffusive load balancing method for scalable multicomputers. In contrast to other schemes which are provably correct, the method scales to large numbers of processors with no increase in run time. In contrast to other schemes which are scalable, the method is provably correct, and the paper analyzes the rate of convergence. To control aggregate CPU idle time it can be useful to balance the load to specifiable accuracy. The method achieves arbitrary accuracy by proper consideration of numerical error and stability. This paper presents the method, proves correctness, convergence and scalability, and simulates applications to generic problems in computational fluid dynamics (CFD). The applications reveal some useful properties. The method can preserve adjacency relationships among elements of an adapting computational domain. This makes it useful for partitioning unstructured computational grids in concurrent computations. The method can execute asynchronously to balance a subportion of a domain without affecting the rest of the domain. Theory and experiment show the method is efficient on the scalable multicomputers of the present and coming years. The number of floating point operations required per processor to reduce a point disturbance by 90% is 168 on a system of 512 computers and 105 on a system of 1,000,000 computers. On a typical contemporary multicomputer [19] this requires 82.5 µs wall-clock time.
https://authors.library.caltech.edu/records/y8qgh-6jk53

A File System for the J-Machine
https://resolver.caltech.edu/CaltechCSTR:1993.cs-tr-93-27
Authors: Zadik, Yair; Taylor, Stephen
Year: 2001
DOI: 10.7907/Z9Z03666
[No abstract available]
https://authors.library.caltech.edu/records/zz05v-gk145

System Tools for the J-Machine
https://resolver.caltech.edu/CaltechCSTR:1993.cs-tr-93-12
Authors: Maskit, Daniel; Zadik, Yair; Taylor, Stephen
Year: 2001
DOI: 10.7907/Z9D798FW
[No abstract available]
https://authors.library.caltech.edu/records/tj4b2-2jh12

Experiences in Programming the J-Machine
https://resolver.caltech.edu/CaltechCSTR:1993.cs-tr-93-11
Authors: Maskit, Daniel; Taylor, Stephen
Year: 2001
DOI: 10.7907/Z92R3PQ7
This document summarizes experiences gained in programming the J-Machine. It is intended to provide feedback on the strengths and weaknesses of the architecture. The intent of this document is to provide useful information to system architects to assist in the design of the next generation of fine-grained multicomputers.
https://authors.library.caltech.edu/records/rqz0m-we595

A Development Methodology for Concurrent Programs
https://resolver.caltech.edu/CaltechCSTR:1994.cs-tr-94-16
Authors: Chow, Bryan; Fyfe, Andrew; Maskit, Daniel; Taylor, Stephen; Watts, Jarrell R.; Zadik, Yair
Year: 2001
DOI: 10.7907/S4MW2X
This paper describes a development methodology for the design of concurrent programs that provides a migration path from existing sequential C and FORTRAN programs. These programs may be executed immediately, without change, using the entire physical memory of a distributed memory machine or a network of ATM-coupled shared-memory multiprocessors. Subsequent program refinements may involve data and control decomposition together with explicit message passing to improve performance. Each step in the program development may utilize new hardware mechanisms supporting shared memory, segmentation and protection. The ideas presented in this paper are currently being implemented within the Multiflow compiler which is being targeted for the M-Machine. Although the examples we present use the C programming language, the concepts will also be available in FORTRAN.
https://authors.library.caltech.edu/records/q5vny-2px19
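
The diffusive load-balancing idea behind the two Heirich–Taylor records above ("A Parabolic Theory of Load Balance" and "A Parabolic Load Balancing Method") can be illustrated with a short sketch. This is not code from those reports: the topology, step size, and iteration count below are illustrative choices. It shows the discrete analogue of the heat equation u_t = ∇²u, in which each processor repeatedly exchanges a fixed fraction of its load difference with each neighbor, using only local communication, until the load approaches the uniform steady state.

```python
def diffuse(load, neighbors, alpha=0.25, steps=200):
    """One diffusion sweep per step: each node i moves alpha * (load[j] - load[i])
    toward it from each neighbor j. Exchanges are symmetric, so total load is
    conserved; for stability alpha * max_degree must stay below 1."""
    load = list(load)
    for _ in range(steps):
        new = load[:]
        for i, nbrs in enumerate(neighbors):
            for j in nbrs:
                new[i] += alpha * (load[j] - load[i])
        load = new
    return load

# 8 processors on a ring (a stand-in for the papers' mesh-connected
# multicomputers), with a point disturbance on processor 0.
ring = [((i - 1) % 8, (i + 1) % 8) for i in range(8)]
balanced = diffuse([80, 0, 0, 0, 0, 0, 0, 0], ring)
# Each processor's load converges toward the mean (10 units).
```

Because each sweep is a linear smoothing of the load vector, the convergence rate is governed by the spectrum of the diffusion operator, which is what lets the papers predict both the rate of convergence and the quality of the final distribution analytically.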