  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Algorithms for economic storage and uniform generation of graphs

Pu, Ida Mengyi January 1997
No description available.
2

Special Cases of Carry Propagation

Izsak, Alexander 01 May 2007
The average time necessary to add numbers by local parallel computation is directly related to the length of the longest carry propagation chain in the sum. The mean length of the longest carry propagation chain when adding two independent uniform random n-bit numbers is a well-studied topic, and useful approximations, as well as an exact expression for this value, have been found. My thesis searches for similar formulas for the mean length of the longest carry propagation chain in the sums that arise when a random n-digit number is multiplied by a number of the form 1 + 2^d. Letting Cn,d represent the desired mean, my thesis details how to find formulas for Cn,d using probability, generating functions and linear algebra arguments. I also find bounds on Cn,d to prove that Cn,d = log2 n + O(1), and show work towards finding an even more exact approximation for Cn,d.
3

Design and Implementation of Thread-Level Speculation in JavaScript Engines

Martinsen, Jan Kasper January 2014
Two important trends in computer systems are that applications are moving to the Internet as web applications, and that computer systems have an increasing number of cores to improve performance. It has been shown that JavaScript in web applications has a large potential for parallel execution despite the fact that JavaScript is a sequential language. In this thesis, we show that JavaScript execution in web applications and in benchmarks is fundamentally different, and that one effect of this is that Just-in-time compilation often does not improve the execution time, but rather increases it, for JavaScript in web applications. Since there is significant potential for parallel computation in JavaScript for web applications, we show that Thread-Level Speculation can be used to take advantage of it in a manner completely transparent to the programmer. The Thread-Level Speculation technique is well suited to improving the performance of JavaScript execution in web applications; however, we observe that the memory overhead can be substantial. Therefore, we propose several techniques for adaptive speculation as well as for memory reduction. In the last part of this thesis we show that Just-in-time compilation and Thread-Level Speculation are complementary techniques. The execution characteristics of JavaScript in web applications are very suitable for combining Just-in-time compilation and Thread-Level Speculation. Finally, we show that Thread-Level Speculation and Just-in-time compilation can be combined to reduce power usage on embedded devices.
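The core speculation idea can be shown in miniature. This toy sketch (my own construction, not the authors' engine) runs tasks in parallel against a snapshot of shared state, buffers their writes, detects read-after-write conflicts at commit time, and re-executes misspeculated tasks sequentially:

```python
import threading

class SpeculativeStore:
    """Toy thread-level speculation: tasks run in parallel against a
    snapshot, buffering writes; a task commits only if no earlier task
    (in program order) wrote a key it read, otherwise it re-runs."""

    def __init__(self, state):
        self.state = dict(state)

    def run_speculatively(self, tasks):
        snapshot = dict(self.state)
        results = [None] * len(tasks)

        def worker(i, task):
            reads, writes = set(), {}
            def get(k):
                reads.add(k)
                return writes.get(k, snapshot[k])
            def put(k, v):
                writes[k] = v
            task(get, put)
            results[i] = (reads, writes)

        threads = [threading.Thread(target=worker, args=(i, t))
                   for i, t in enumerate(tasks)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        written = set()
        for i, (reads, writes) in enumerate(results):
            if reads & written:  # misspeculation: redo sequentially
                def get(k):
                    return self.state[k]
                def put(k, v):
                    self.state[k] = v
                tasks[i](get, put)
            else:                # no conflict: commit buffered writes
                self.state.update(writes)
            written |= writes.keys()
        return self.state
```

Committing in program order preserves sequential semantics, which is what makes the technique transparent to the programmer, as described above.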
4

Development of Apple Workgroup Cluster and Parallel Computing for Phase Field Model of Magnetic Materials

Huang, Yongxin 16 January 2010
Micromagnetic modeling numerically solves the magnetization evolution equation to perform magnetic domain analysis, which helps explain the macroscopic magnetic properties of ferromagnets. Applying this method to the simulation of magnetostrictive ferromagnets poses two main challenges: the complicated microelasticity due to the magnetostrictive strain, and the very expensive computation caused mainly by the calculation of long-range magnetostatic and elastic interactions. Parallel computation of a phase field model on a computer cluster is therefore developed as a promising tool for domain analysis in magnetostrictive ferromagnetic materials. We have successfully built an 8-node Apple workgroup cluster, deploying the hardware system and configuring the software environment, as a platform for parallel computation of a phase field model of magnetic materials. Several test programs have been implemented to evaluate the performance of the cluster system, especially for parallel computation using MPI. The results show the cluster system can simultaneously support up to 32 processes for an MPI program with high interprocess communication performance. Parallel computations of the phase field model of magnetic materials, implemented as an MPI program, have been performed on the developed cluster system. The simulated results of single-domain rotation in Terfenol-D crystals agree well with the theoretical prediction. A further simulation including magnetic and elastic interactions among multiple domains shows that we need to take these interaction effects into account in order to accurately characterize the magnetization processes in Terfenol-D. These simulation examples suggest that parallel computation of the phase field model of magnetic materials on a powerful cluster system is a promising technology that meets the needs of domain analysis.
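The single-domain rotation experiment mentioned above can be sketched in miniature. This brute-force energy scan assumes a reduced Stoner-Wohlfarth-type energy (uniaxial anisotropy plus a Zeeman term) with illustrative, uncalibrated parameters; it is not the thesis's phase field or MPI code:

```python
import math

def equilibrium_angle(h, theta_h, steps=40000):
    """Minimize the reduced single-domain energy
        e(theta) = sin(theta)**2 - 2*h*cos(theta - theta_h)
    (easy axis at theta = 0, field of reduced strength h applied at
    angle theta_h) by a brute-force scan over [0, 2*pi)."""
    best_theta, best_e = 0.0, float('inf')
    for i in range(steps):
        theta = 2.0 * math.pi * i / steps
        e = math.sin(theta) ** 2 - 2.0 * h * math.cos(theta - theta_h)
        if e < best_e:
            best_theta, best_e = theta, e
    return best_theta

def rotation_curve(theta_h, fields):
    """Magnetization projection onto the field direction as the field
    is swept: the single-domain rotation curve."""
    return [math.cos(equilibrium_angle(h, theta_h) - theta_h)
            for h in fields]
```

At large field the moment aligns with the field (projection near 1); at small field it stays near the easy axis, reproducing the coherent-rotation behavior that the cluster simulations check against theory.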
5

Improved integral equation methods for transient wave scattering

Lee, Byoung Hwa January 1996
No description available.
6

Parallel Distributed Processing of Realtime Telemetry Data

Murphy, Donald P. October 1987
International Telemetering Conference Proceedings / October 26-29, 1987 / Town and Country Hotel, San Diego, California / An architecture is described for processing multiple digital PCM telemetry streams. This architecture is implemented using a collection of Motorola mono-board microprocessor units (MPUs) in a single chassis called an Intermediate Processing Unit (IPU). Multiple IPUs can be integrated using a common input data bus. Each IPU is capable of processing a single PCM digital telemetry stream. Processing, in this context, includes conversion of raw sample-count data to engineering units; computation of derived quantities from measurement sample data; calculation of minimum, maximum, average and cyclic [(maximum - minimum)/2] values for both measurement and derived data over a preselected time interval; out-of-limit, dropout and wildpoint detection; strip-chart recording of selected data; transmission of both measurement and derived data to a high-speed, large-capacity disk storage subsystem; and transmission of compressed data to the host computer for realtime processing and display. All processing is done in realtime with at most two PCM major frames of time latency.
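The per-interval processing steps defined above (engineering-unit conversion, min/max/average/cyclic statistics, and limit checking) can be sketched directly. The linear calibration and limit values here are hypothetical:

```python
def engineering_units(raw, scale, offset):
    """Convert a raw telemetry sample count to engineering units with
    a linear calibration: eu = scale * raw + offset (illustrative)."""
    return scale * raw + offset

def interval_stats(samples):
    """Minimum, maximum, average and cyclic value over one reporting
    interval, with cyclic = (maximum - minimum) / 2 as defined above."""
    lo, hi = min(samples), max(samples)
    return {
        "min": lo,
        "max": hi,
        "avg": sum(samples) / len(samples),
        "cyclic": (hi - lo) / 2,
    }

def out_of_limit(samples, low, high):
    """Indices of samples violating the configured limit band."""
    return [i for i, s in enumerate(samples) if not (low <= s <= high)]
```

In the described architecture each IPU would apply this kind of pipeline to its own PCM stream, which is what makes the per-stream work embarrassingly parallel across IPUs.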
7

BACKWARD PROPAGATION BASED ALGORITHMS FOR HIGH-PERFORMANCE IMAGE FORMATION

Lee, Hua, Lockwood, Stephanie, Tandon, James, Brown, Andrew October 2000
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California / In this paper, we present the recent results of theoretical development and software implementation of a complete collection of high-performance image reconstruction algorithms designed for high-resolution imaging for various data acquisition configurations.
8

Simulations of subsurface multiphase flow including polymer flooding in oil reservoirs and infiltration in vadose zone

Yuan, Changli 31 August 2010
With the depletion of oil reserves and the increase in oil price, enhanced oil recovery methods such as polymer flooding, which increases oil production from water-flooded fields, are becoming more attractive. Effective design of these processes is challenging because the polymer chemistry has a strong effect on reaction and fluid rheology, which in turn has a strong effect on fluid transport. We have implemented a well-established polymer model within the Implicit Parallel Accurate Reservoir Simulator (IPARS), which enables parallel simulation of non-Newtonian fluid flow through porous media. The following properties of polymer solutions are modeled in this work: 1) polymer adsorption; 2) polymer viscosity as a function of salinity, hardness, polymer concentration, and shear rate; 3) permeability reduction; 4) inaccessible pore volume. IPARS enables field-scale polymer flooding simulation through its parallel computation capability. In this thesis, several numerical examples are presented. The results of the polymer module are verified against UTCHEM, a three-dimensional chemical flood simulator developed at The University of Texas at Austin. The parallel capability is also tested. The influence of different shear-rate calculations is investigated in homogeneous and heterogeneous reservoirs. We observed that using the wellbore velocity rather than the Darcy velocity in the shear-rate calculation reduces the grid effect on coarse meshes. We noted that the injection bottom-hole pressure is very sensitive to the shear-rate calculation; however, cumulative oil recovery and overall oil saturation appear not to be sensitive to the grid or the shear-rate calculation for the same reservoir. There are two models for groundwater infiltration in the vadose zone: the Richards equation (RE) model and the two-phase flow model.
In this work, we compare the two-phase model with an RE model to ascertain, under common scenarios such as infiltration or injection of water into initially dry soils, the similarities and differences in solution behaviors and the ability of each model to simulate such infiltration processes under realistic scenarios, and to investigate the numerical efficiencies and difficulties that arise in these models. Six different data sets were assembled as benchmark infiltration problems in the unsaturated zone. The comparison shows that the two-phase model holds for general porous media and is not limited by the several assumptions that must be made for the RE formulation, whereas RE is applicable only to shallow (vadose) regions several meters in depth, for which a fully saturated bottom boundary condition must be assumed.
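The polymer-viscosity behavior described in this abstract (dependence on concentration, salinity and shear rate) is commonly modeled in chemical-flood simulators with a polynomial zero-shear correlation plus Meter's equation for shear thinning. A sketch of that common form with illustrative, uncalibrated coefficients, not the coefficients used in the thesis:

```python
def zero_shear_viscosity(c_p, c_sep, mu_w=1.0,
                         a1=40.0, a2=0.0, a3=0.0, s_p=-0.3):
    """Zero-shear polymer viscosity: a polynomial in polymer
    concentration c_p scaled by effective salinity c_sep raised to the
    slope parameter s_p.  All coefficients here are illustrative."""
    return mu_w * (1.0 + (a1 * c_p + a2 * c_p**2 + a3 * c_p**3)
                   * c_sep**s_p)

def shear_viscosity(gamma, mu_p0, mu_w=1.0,
                    gamma_half=10.0, p_alpha=1.8):
    """Meter's equation for shear thinning: viscosity falls from the
    zero-shear value mu_p0 toward the solvent viscosity mu_w as the
    shear rate gamma increases past gamma_half."""
    return mu_w + (mu_p0 - mu_w) / (1.0 + (gamma / gamma_half)
                                    ** (p_alpha - 1.0))
```

The sensitivity to the shear-rate calculation noted above enters through `gamma`: whether it is computed from the Darcy velocity or the wellbore velocity changes where each grid block sits on this thinning curve.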
9

Robust 2-D Model-Based Object Recognition

Cass, Todd A. 01 May 1988
Techniques, suitable for parallel implementation, for robust 2D model-based object recognition in the presence of sensor error are studied. Models and scene data are represented as local geometric features, and the robust hypothesizing of feature matchings and transformations is considered. Bounds on the error in the image feature geometry are assumed, constraining the possible matchings and transformations. Transformation sampling is introduced as a simple, robust, polynomial-time, and highly parallel method of searching the space of transformations to hypothesize feature matchings. Key to the approach is that error in image feature measurement is explicitly accounted for. A Connection Machine implementation and experiments on real images are presented.
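Transformation sampling as described here can be sketched directly: score a grid of candidate rigid transformations by how many model features land within the error bound of some image feature, and keep the best hypothesis. The grid resolution and error bound below are my own illustrative choices, not values from the thesis:

```python
import math

def transform(pt, theta, tx, ty):
    """Apply a 2D rigid transformation (rotation theta, translation
    (tx, ty)) to a point."""
    x, y = pt
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y + tx, s * x + c * y + ty)

def score(model, image, theta, tx, ty, eps):
    """Count model features landing within eps of some image feature
    under the candidate transformation: the bounded-error criterion."""
    hits = 0
    for m in model:
        mx, my = transform(m, theta, tx, ty)
        if any(math.hypot(mx - ix, my - iy) <= eps for ix, iy in image):
            hits += 1
    return hits

def best_transform(model, image, eps=0.1, n_theta=36, grid=None):
    """Transformation sampling: exhaustively score a grid of rotations
    and translations and keep the highest-scoring hypothesis.  Each
    sample is independent, so the loop parallelizes trivially."""
    if grid is None:
        grid = [x / 2.0 for x in range(-10, 11)]  # translations -5..5
    best = (0, (0.0, 0.0, 0.0))
    for k in range(n_theta):
        theta = 2 * math.pi * k / n_theta
        for tx in grid:
            for ty in grid:
                s = score(model, image, theta, tx, ty, eps)
                if s > best[0]:
                    best = (s, (theta, tx, ty))
    return best
```

Because every sampled transformation is scored independently, the search maps naturally onto a massively parallel machine such as the Connection Machine mentioned above.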
10

Valuing Hedge Fund Fees

Xiao, Li January 2006
This thesis applies a partial integro-differential equation (PIDE) model, along with a Monte Carlo approach, to quantitatively analyze the no-arbitrage value of hedge fund performance fees. From a no-arbitrage point of view, the investor in a hedge fund is providing a free option to the manager of the hedge fund. The no-arbitrage value of this option can be locked in by the hedge fund manager using a simple hedging strategy. Interpolation methods, grid construction techniques and parallel computation techniques are discussed to improve the performance of the numerical methods for valuing this option.
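The Monte Carlo side of such a valuation can be sketched as follows: simulate fund paths under geometric Brownian motion and accumulate the discounted performance fees collected above a running high-water mark. All parameters and the fee mechanics below are illustrative assumptions of mine, not taken from the thesis:

```python
import math
import random

def performance_fee_value(s0=100.0, r=0.05, sigma=0.2, t=5.0,
                          fee_rate=0.2, steps_per_year=12,
                          n_paths=5000, seed=7):
    """Monte Carlo estimate of the risk-neutral value of a
    high-water-mark performance fee: each period the manager collects
    fee_rate times the fund's gain above its running high-water mark,
    and the fee is paid out of the fund's NAV."""
    rng = random.Random(seed)
    n = int(t * steps_per_year)
    dt = 1.0 / steps_per_year
    drift = (r - 0.5 * sigma * sigma) * dt
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        s, hwm, fees = s0, s0, 0.0
        for k in range(1, n + 1):
            # risk-neutral GBM step
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            if s > hwm:
                fee = fee_rate * (s - hwm)
                fees += fee * math.exp(-r * k * dt)
                s -= fee        # fee reduces the fund's NAV
                hwm = s         # high-water mark resets post-fee
        total += fees
    return total / n_paths
```

The positive value this returns is exactly the "free option" granted to the manager that the abstract refers to; a PIDE or lattice method would price the same payoff without sampling noise.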
