391

Commercial scale simulations of surfactant/polymer flooding

Yuan, Changli 25 October 2012 (has links)
The depletion of oil reserves and higher oil prices have made chemical enhanced oil recovery (EOR) methods more attractive in recent years. Because of geological heterogeneity, unfavorable mobility ratios, and capillary forces, conventional oil recovery (including water flooding) leaves behind much of the oil in the reservoir, often as much as 70% of the original oil in place (OOIP). Surfactant/polymer flooding targets this bypassed oil left after waterflooding by reducing water mobility and oil/water interfacial tension. The complexity and uncertainty of reservoir characterization make the design and implementation of a robust and effective surfactant/polymer flood quite challenging. Accurate numerical simulation prior to the field surfactant/polymer flood is essential for a successful design and implementation. A recently developed unified polymer viscosity model was implemented into the existing polymer module of our in-house reservoir simulator, the Implicit Parallel Accurate Reservoir Simulator (IPARS). The new viscosity model captures not only the Newtonian and shear-thinning rheology of polymer solutions but also the shear-thickening behavior that may occur near the wellbore at high injection rates when high-molecular-weight partially hydrolyzed polyacrylamide (HPAM) polymers are injected. We have added a full surfactant/polymer flooding capability to the TRCHEM module of IPARS using a simplified but mechanistic and user-friendly approach for modeling surfactant/water/oil phase behavior. The features of the surfactant module include: 1) surfactant component transport in porous media; 2) surfactant adsorption on the rock; 3) surfactant/oil/water phase behavior transitioning with salinity among Type II(-), Type III, and Type II(+) behavior; 4) a compositional microemulsion phase viscosity correlation; and 5) relative permeabilities based on the trapping number. With the parallel capability of IPARS, commercial-scale simulation of surfactant/polymer flooding becomes practical and affordable. Several numerical examples are presented in this dissertation. The results of the surfactant/polymer flood simulations are verified by comparison with results obtained from UTCHEM, a three-dimensional chemical flooding simulator developed at The University of Texas at Austin. The parallel capability and scalability are also demonstrated.
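As a rough illustration of the rheology described above, the sketch below combines a Carreau-type shear-thinning curve with an empirical shear-thickening term that grows at high shear rates. It is a minimal stand-in, not the unified model implemented in IPARS or UTCHEM; all parameter names and values are illustrative assumptions.

```python
import numpy as np

def apparent_viscosity(shear_rate, mu_w=1.0, mu_p0=40.0, lam=5.0, n=0.6,
                       mu_max=120.0, lam2=0.02, n2=1.8):
    """Toy unified polymer viscosity curve (cP as a function of shear rate in 1/s).

    Combines a Carreau-type shear-thinning branch (Newtonian plateau at low
    shear, power-law decline at intermediate shear) with an empirical
    shear-thickening term that becomes significant at the high shear rates
    found near injection wells.  All parameters are illustrative.
    """
    # Shear-thinning branch: plateau mu_p0 decaying toward the water viscosity mu_w.
    mu_thin = mu_w + (mu_p0 - mu_w) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)
    # Shear-thickening branch: negligible at low shear, grows toward mu_max at high shear.
    mu_thick = mu_max * (1.0 - np.exp(-((lam2 * shear_rate) ** (n2 - 1.0))))
    return mu_thin + mu_thick

if __name__ == "__main__":
    for rate in (0.1, 1.0, 10.0, 100.0, 1000.0):
        print(f"shear rate {rate:8.1f} 1/s -> {apparent_viscosity(rate):6.1f} cP")
```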
392

Modeling and synthesis of quality-energy optimal approximate adders

Miao, Jin 04 March 2013 (has links)
Recent interest in approximate computation is driven by its potential to achieve large energy savings. We formally demonstrate an optimal way to reduce energy via voltage over-scaling, at the cost of errors due to timing starvation in the adder. A fundamental trade-off between error frequency and error magnitude in a timing-starved adder is identified. We introduce a formal model to prove that for signal processing applications using a quadratic signal-to-noise-ratio error measure, reducing bit-wise error frequency is sub-optimal. Instead, energy-optimal approximate addition requires limiting the maximum error magnitude. Intriguingly, due to the possible error patterns, this is achieved by reducing carry chains significantly below what the timing budget allows for a large fraction of sum bits, using an aligned, fixed internal-carry structure for higher-significance bits. We further demonstrate that the remaining approximation error is reduced by realizing conditional bounding (CB) logic for the lower-significance bits. A key contribution is the formalization of an approximate CB logic synthesis problem that produces a rich space of Pareto-optimal adders with a range of quality-energy trade-offs. We show how CB logic can be customized to yield over- and under-estimating approximate adders, and how a dithering adder that mixes them produces zero-centered error distributions and, in accumulation, a reduced-variance error. This work demonstrates synthesized approximate adders with energy up to 60% lower than that of a conventional timing-starved adder, where a 30% reduction is due to the superior synthesis of inexact CB logic. When used in a larger system implementing an image-processing algorithm, energy savings of 40% are possible.
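To make the error-frequency versus error-magnitude trade-off concrete, the sketch below models a much simpler approximate adder that breaks the carry chain into fixed-width blocks, so errors occur often but their magnitude stays bounded. It is an illustrative stand-in under assumed bit widths and block sizes, not the synthesized conditional-bounding adders of the dissertation.

```python
import random

def segmented_add(a, b, width=16, block=4):
    """Approximate adder in which carries propagate only within fixed blocks.

    Dropping the carry at every block boundary bounds how far an error can
    travel: error *frequency* goes up, but the maximum error *magnitude* is
    limited, which is the behavior discussed in the abstract above.
    """
    result = 0
    for lo in range(0, width, block):
        mask = (1 << block) - 1
        a_blk = (a >> lo) & mask
        b_blk = (b >> lo) & mask
        # Carry-in from the previous block is dropped (assumed zero).
        result |= ((a_blk + b_blk) & mask) << lo
    return result

if __name__ == "__main__":
    random.seed(0)
    width, trials = 16, 10_000
    errs = []
    for _ in range(trials):
        a = random.getrandbits(width)
        b = random.getrandbits(width)
        exact = (a + b) & ((1 << width) - 1)
        errs.append(abs(exact - segmented_add(a, b, width)))
    print(f"error frequency    : {sum(e > 0 for e in errs) / trials:.3f}")
    print(f"max error magnitude: {max(errs)}")
```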
393

Robust non-linear control through neuroevolution

Gomez, Faustino John 28 August 2008 (has links)
Not available
394

Towards practical fully homomorphic encryption

Alperin-Sheriff, Jacob 21 September 2015 (has links)
Fully homomorphic encryption (FHE) allows for computation of arbitrary functions on encrypted data by a third party, while keeping the contents of the encrypted data secure. This area of research has exploded in recent years following Gentry’s seminal work. However, the early realizations of FHE, while very interesting from a theoretical and proof-of-concept perspective, are unfortunately far too inefficient to provide any use in practice. The bootstrapping step is the main bottleneck in current FHE schemes. This step refreshes the noise level present in the ciphertexts by homomorphically evaluating the scheme’s decryption function over encryptions of the secret key. Bootstrapping is necessary in all known FHE schemes in order to allow an unlimited amount of computation, as without bootstrapping, the noise in the ciphertexts eventually grows to a point where decryption is no longer guaranteed to be correct. In this work, we present two new bootstrapping algorithms for FHE schemes. The first works on packed ciphertexts, which encrypt many bits at a time, while the second works on unpacked ciphertexts, which encrypt a single bit at a time. Our algorithms lie at the heart of the fastest currently existing implementations of fully homomorphic encryption for packed ciphertexts and for single-bit encryptions, respectively, running hundreds of times as fast for practical parameters as the previous best implementations.
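A conceptual picture of why bootstrapping is needed can be given without any real cryptography: the sketch below tracks only a ciphertext's noise level, which grows with each homomorphic operation until decryption would fail, at which point a (costly) bootstrap resets it. The constants and growth rules are purely illustrative assumptions.

```python
class ToyCiphertext:
    """Conceptual noise-budget model of an FHE ciphertext (no real crypto).

    Additions grow noise slowly, multiplications quickly, and decryption is
    correct only while noise stays below a ceiling.  Bootstrapping
    homomorphically "re-encrypts" the value, resetting noise to a fixed
    (non-zero) post-bootstrap level.  All constants are illustrative.
    """
    NOISE_CEILING = 1_000_000      # decryption no longer guaranteed beyond this
    FRESH_NOISE = 10               # noise in a freshly encrypted ciphertext
    POST_BOOTSTRAP_NOISE = 500     # noise remaining after a bootstrap

    def __init__(self, noise=FRESH_NOISE):
        self.noise = noise

    def add(self, other):
        return ToyCiphertext(self.noise + other.noise)

    def mul(self, other):
        # Multiplication roughly multiplies the noise terms together.
        return ToyCiphertext(self.noise * other.noise)

    def bootstrap(self):
        # The expensive refresh step: evaluate decryption homomorphically.
        return ToyCiphertext(self.POST_BOOTSTRAP_NOISE)

    def decrypts_correctly(self):
        return self.noise < self.NOISE_CEILING

if __name__ == "__main__":
    ct = ToyCiphertext()
    for depth in range(1, 6):
        ct = ct.mul(ToyCiphertext())
        if ct.decrypts_correctly():
            print(f"depth {depth}: noise {ct.noise} still decryptable")
        else:
            print(f"depth {depth}: noise {ct.noise} too large -> bootstrap")
            ct = ct.bootstrap()
```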
395

Practical Verified Computation with Streaming Interactive Proofs

Thaler, Justin R 14 October 2013 (has links)
As the cloud computing paradigm has gained prominence, the need for verifiable computation has grown urgent. Protocols for verifiable computation enable a weak client to outsource difficult computations to a powerful, but untrusted, server. These protocols provide the client with a (probabilistic) guarantee that the server performed the requested computations correctly, without requiring the client to perform the computations herself. / Engineering and Applied Sciences
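A minimal example of probabilistic verification, not the streaming interactive proofs developed in this dissertation, is Freivalds' classic check for outsourced matrix multiplication: the client verifies the server's claimed product with O(n^2) work per trial instead of recomputing it in O(n^3). The matrices below are illustrative.

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Probabilistically verify that C == A @ B without recomputing A @ B.

    Each trial multiplies by a random 0/1 vector: computing A(Bx) and Cx costs
    the verifier O(n^2) work, versus O(n^3) to redo the multiplication.  A wrong
    C is caught in one trial with probability >= 1/2, so the chance of missing
    an error is at most 2 ** -trials.
    """
    n = len(A)
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        Bx = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
        ABx = [sum(A[i][j] * Bx[j] for j in range(n)) for i in range(n)]
        Cx = [sum(C[i][j] * x[j] for j in range(n)) for i in range(n)]
        if ABx != Cx:
            return False          # the server's answer is definitely wrong
    return True                   # correct with overwhelming probability

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    good = [[19, 22], [43, 50]]
    bad = [[19, 22], [43, 51]]
    print(freivalds_check(A, B, good))   # True
    print(freivalds_check(A, B, bad))    # False (with high probability)
```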
396

Application of dependence analysis and runtime data flow graph scheduling to matrix computations

Chan, Ernie W., 1982- 23 November 2010 (has links)
We present a methodology for exploiting shared-memory parallelism within matrix computations by expressing linear algebra algorithms as directed acyclic graphs. Our solution involves a separation of concerns that completely hides the exploitation of parallelism from the code that implements the linear algebra algorithms. This approach is fundamentally different in that we also address programmability rather than focusing strictly on parallelization. Using this separation of concerns, we present a framework for analyzing and developing scheduling algorithms and heuristics for this problem domain. As such, we develop a theory and practice of scheduling concepts for matrix computations in this dissertation.
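The sketch below gives a much simplified picture of that separation of concerns: the code that generates tasks and their dependencies knows nothing about threads, while a generic scheduler runs any task whose dependencies are satisfied on a thread pool. The task names and toy DAG are illustrative assumptions, not the dissertation's runtime.

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def run_dag(tasks, deps, workers=4):
    """Execute a task DAG on a thread pool while respecting dependencies.

    `tasks` maps a task name to a zero-argument callable; `deps` maps a task
    name to the set of task names that must finish first.  The algorithm that
    *generates* the DAG stays completely separate from this scheduler.
    """
    done, in_flight = set(), {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while len(done) < len(tasks):
            # Submit every task whose dependencies are all satisfied.
            for name, fn in tasks.items():
                if name not in done and name not in in_flight and deps[name] <= done:
                    in_flight[name] = pool.submit(fn)
            # Wait for at least one in-flight task to finish, then retire it.
            finished, _ = wait(in_flight.values(), return_when=FIRST_COMPLETED)
            for name, fut in list(in_flight.items()):
                if fut in finished:
                    fut.result()
                    done.add(name)
                    del in_flight[name]

if __name__ == "__main__":
    # Toy DAG mimicking a blocked factorization: factor a diagonal block, then
    # update the blocks that depend on it, then factor the trailing block.
    log = []
    tasks = {
        "factor_A00": lambda: log.append("factor_A00"),
        "update_A10": lambda: log.append("update_A10"),
        "update_A11": lambda: log.append("update_A11"),
        "factor_A11": lambda: log.append("factor_A11"),
    }
    deps = {
        "factor_A00": set(),
        "update_A10": {"factor_A00"},
        "update_A11": {"update_A10"},
        "factor_A11": {"update_A11"},
    }
    run_dag(tasks, deps)
    print(log)
```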
397

IMPLEMENTATION OF FILTERING BEAMFORMING ALGORITHMS FOR SONAR DEVICES USING GPU

Kamali, Shahrokh 27 June 2013 (has links)
Beamforming is a signal processing technique used in sensor arrays to direct signal transmission or reception. A beamformer combines the input signals of the array to achieve constructive interference at particular angles (beams) and destructive interference at other angles. This work is motivated by the following observations: 1) beamforming can be computationally intensive, so real-time beamforming algorithms in sonar devices are important; 2) parallel computing has become a critical component of the computing technology of the 1990s, and it is likely to have as much impact over the next 20 years as microprocessors have had over the past 20 [5]; 3) the high-performance computing community has been developing parallel programs for decades, but these programs run on large-scale, expensive computers whose use only a few elite applications can justify [2]; and 4) GPU computing offers parallel computing capability and is available on personal computers. The objective of this thesis is to use the Graphics Processing Unit (GPU) as a real-time digital beamformer to accelerate this intensive signal processing.
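The following sketch shows the arithmetic of a time-domain delay-and-sum beamformer on the CPU with NumPy; every (steering angle, sample) pair is independent, which is the data parallelism a GPU implementation would exploit. The array geometry, sample rate, and signal parameters are illustrative assumptions, not those of the sonar device studied in the thesis.

```python
import numpy as np

def delay_and_sum(signals, fs, spacing, sound_speed, angles_deg):
    """Time-domain delay-and-sum beamformer for a uniform linear array.

    `signals` has shape (num_sensors, num_samples).  For each steering angle
    the per-sensor delays are applied as integer sample shifts and the shifted
    channels are averaged into one beam.
    """
    num_sensors, num_samples = signals.shape
    beams = np.zeros((len(angles_deg), num_samples))
    for a, angle in enumerate(np.deg2rad(angles_deg)):
        for m in range(num_sensors):
            delay_sec = m * spacing * np.sin(angle) / sound_speed
            shift = int(round(delay_sec * fs))
            beams[a] += np.roll(signals[m], -shift)   # advance to undo the delay
        beams[a] /= num_sensors
    return beams

if __name__ == "__main__":
    fs, c = 48_000, 1500.0                 # sample rate (Hz), sound speed (m/s)
    d = 0.375                              # element spacing = half wavelength at 2 kHz
    t = np.arange(2048) / fs
    # Simulate a 2 kHz plane wave arriving from 20 degrees on an 8-element array.
    src = np.deg2rad(20.0)
    signals = np.stack([
        np.sin(2 * np.pi * 2000 * (t - m * d * np.sin(src) / c)) for m in range(8)
    ])
    angles = np.arange(-90, 91, 10)
    powers = (delay_and_sum(signals, fs, d, c, angles) ** 2).mean(axis=1)
    print("strongest beam at", angles[int(np.argmax(powers))], "degrees")
```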
398

Agent-Based Modelling of Stress and Productivity Performance in the Workplace

Page, Matthew 23 August 2013 (has links)
The ill effects of stress due to fatigue significantly impact the welfare of individuals and consequently impact overall corporate productivity. This study introduces a simplified model of stress in the workplace using agent-based simulation, and represents a novel contribution to the field of evolutionary computation. Agents are initially encoded using a String Representation and later expanded to multi-state Binary Decision Automata (BDA) to choose between working on a base task, working on a special project, or resting. Training occurs by agents inaccurately mimicking the behaviour of highly productive mentors. Stress accumulates through working long hours, thereby decreasing the productivity performance of an agent; the lowest-productivity agents are fired or retrained. The String Representation demonstrated near-average performance, attributed to the normally distributed tasks assigned to the string. The BDA representation was found to be highly adaptive, responding robustly to parameter changes. By reducing the number of simplifications in the model, a more accurate representation of the real world can be achieved.
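The sketch below is a deliberately tiny version of such a model: agents accumulate stress while working, stress erodes output, and resting sheds stress. The threshold rule and all constants are illustrative assumptions standing in for the String and BDA policies of the study.

```python
import random

class Worker:
    """Toy agent: each hour it either works or rests.

    Working adds output but accumulates stress, and accumulated stress scales
    output down; resting sheds stress.  The work/rest rule is a simple stress
    threshold, a stand-in for the encoded agent policies described above.
    """
    def __init__(self, rest_threshold):
        self.rest_threshold = rest_threshold   # stress level that triggers rest
        self.stress = 0.0
        self.output = 0.0

    def step(self):
        if self.stress < self.rest_threshold:
            self.output += 1.0 / (1.0 + self.stress)   # stress erodes productivity
            self.stress += 0.3
        else:
            self.stress = max(0.0, self.stress - 1.0)

def simulate(hours=200, n_workers=20, seed=1):
    random.seed(seed)
    workers = [Worker(rest_threshold=random.uniform(0.5, 5.0)) for _ in range(n_workers)]
    for _ in range(hours):
        for w in workers:
            w.step()
    return workers

if __name__ == "__main__":
    ranked = sorted(simulate(), key=lambda w: w.output, reverse=True)
    best, worst = ranked[0], ranked[-1]
    print(f"best : rest threshold {best.rest_threshold:.2f}, output {best.output:.1f}")
    print(f"worst: rest threshold {worst.rest_threshold:.2f}, output {worst.output:.1f}")
```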
399

A Case Study of A Multithreaded Buchberger Normal Form Algorithm

Linfoot, Andy James January 2006 (has links)
Groebner bases have many applications in mathematics, science, and engineering. This dissertation deals with the algorithmic aspects of computing these bases. The dissertation begins with a brief introduction of fundamental concepts about Groebner bases, followed by a discussion of various implementation issues. Much of the practical difficulty of using Groebner basis algorithms and techniques stems from their high computational complexity. It is shown that the algorithmic cost of computing a Groebner basis primarily stems from the calculation of normal forms; this is established by studying run profiles of various computations. This leads to two options for making Groebner basis techniques more practical: reduce the complexity by developing new algorithms (heuristics), or reduce the running time of normal form calculations by introducing concurrency. The latter approach is taken in the remainder of the dissertation, where a multithreaded normal form algorithm is presented and discussed. It is shown with a simple example that the new algorithm achieves speedup and scalability. The algorithm also has the advantage of being independent of the completion strategy. We conclude with an outline of future research involving the new algorithm.
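The sketch below uses SymPy (single-threaded) simply to show what a normal-form computation is; the example ideal and polynomial are arbitrary. The reduction it performs is the kernel whose cost dominates Buchberger-type algorithms and which the dissertation parallelizes.

```python
from sympy import symbols, groebner, reduced

# Illustrative Groebner basis and normal-form computation with SymPy.
# SymPy runs the reduction single-threaded; the dissertation studies a
# multithreaded version of exactly this normal-form step.
x, y, z = symbols("x y z")
F = [x**2 + y**2 + z**2 - 1, x*y - z, x - y + z]

G = groebner(F, x, y, z, order="lex")
f = x**3 * y - 2 * y * z + 5

# reduced() returns the quotients and the remainder (the normal form of f
# modulo G); f lies in the ideal generated by F iff the remainder is zero.
quotients, normal_form = reduced(f, list(G), x, y, z, order="lex")
print("normal form of f modulo G:", normal_form)
```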
400

Addressing connectivity challenges for mobile computing and communication

Shi, Cong 27 August 2014 (has links)
Mobile devices are increasingly relied on for computation-intensive and/or communication-intensive applications that go beyond simple connectivity and demand more complex processing. This has been made possible by two trends. First, mobile devices such as smartphones and tablets are increasingly capable, with processing and storage capabilities that improve significantly with every generation. Second, many improved connectivity options (e.g., 3G, WiFi, Bluetooth) are available to mobile devices. In such a rich computing and communication environment, it is promising but also challenging for mobile devices to take advantage of the various available resources to improve application performance. First, with varying connectivity, remote computing resources are not always accessible to mobile devices in a predictable way. Second, given the uncertainty of connectivity and computing resources, contention for them can become severe. This thesis addresses these connectivity challenges for mobile computing and communication. We propose a set of techniques and systems that help mobile applications better handle varying network connectivity when utilizing various computation and communication resources. This thesis makes the following contributions. We design and implement Serendipity, which allows a mobile device to use other, intermittently encountered mobile devices to speed up the execution of parallel applications by carefully allocating computation tasks among them. We design and implement IC-Cloud, which enables a group of mobile devices to efficiently use cloud computing resources for computation offloading even when connectivity is varying or intermittent. We design and implement COSMOS, which provides a scalable computation offloading service to mobile devices at low cost by efficiently managing and allocating cloud computing resources. Finally, we design and implement CoAST, which allows collaborative, application-aware scheduling of mobile traffic to reduce contention for bandwidth among communication-intensive applications without affecting their user experience.
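As a rough illustration of one ingredient of computation offloading, the sketch below implements a generic latency-only break-even test for deciding whether to offload a task given current estimates of bandwidth and round-trip time. It is not the scheduling logic of Serendipity, IC-Cloud, COSMOS, or CoAST; all parameter names and numbers are illustrative assumptions.

```python
def should_offload(local_cycles, cpu_speed_hz, input_bytes, output_bytes,
                   bandwidth_bps, rtt_s, remote_speedup=10.0):
    """Return True if offloading is estimated to finish sooner than running locally.

    A latency-only break-even test (energy is ignored).  With intermittent
    connectivity the bandwidth/RTT estimates may be stale, which is exactly the
    uncertainty the systems described above are designed to handle.
    """
    local_time = local_cycles / cpu_speed_hz
    transfer_time = rtt_s + 8.0 * (input_bytes + output_bytes) / bandwidth_bps
    remote_time = transfer_time + local_time / remote_speedup
    return remote_time < local_time

if __name__ == "__main__":
    # A heavy task over a decent WiFi link: offloading usually wins.
    print(should_offload(2e10, 1.5e9, 2_000_000, 50_000, 20e6, 0.05))
    # The same task over a slow, high-latency link: better to run it locally.
    print(should_offload(2e10, 1.5e9, 2_000_000, 50_000, 0.5e6, 0.4))
```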
