About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

The effects of incorporating dynamic data on estimates of uncertainty

Mulla, Shahebaz Hisamuddin 30 September 2004 (has links)
Petroleum exploration and development are capital intensive, and smart economic decisions need to be made to profitably extract oil and gas from reservoirs. Accurate quantification of uncertainty in production forecasts helps in assessing risk and making good economic decisions. This study investigates how combining dynamic data with the uncertainty in static data affects estimates of uncertainty in production forecasting. Fifty permeability realizations were generated for a reservoir in west Texas from available petrophysical data. We quantified the uncertainty in the production forecasts using a likelihood weighting method and an automatic history matching technique combined with linear uncertainty analysis. The results were compared with the uncertainty predicted using only static data. We also investigated approaches for selecting a smaller number of models from a larger set of realizations to be history matched for quantification of uncertainty. We found that incorporating dynamic data in a reservoir model results in lower estimates of uncertainty than considering only static data. However, incorporating dynamic data does not guarantee that the forecasted ranges will encompass the true value; reliability of the forecasted ranges depends on the method employed. When sampling multiple realizations of static data for history matching to quantify uncertainty, sampling over the entire range of realization likelihoods yields larger confidence intervals and is more likely to encompass the true values of predicted fluid recoveries than selecting only the best models.
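
As a rough illustration of the likelihood-weighting idea described in this abstract, the Python sketch below weights an ensemble of forecast realizations by their history-match misfit and reads off weighted P10/P50/P90 ranges. The Gaussian misfit model and all numbers are illustrative assumptions, not values from the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    n_real = 50                                  # e.g., 50 permeability realizations
    forecast = rng.normal(1.0e6, 2.0e5, n_real)  # forecasted recovery per realization
    misfit = rng.chisquare(5, n_real)            # data misfit from history matching

    # Likelihood weight of each realization, given its misfit to the dynamic data
    # (Gaussian misfit model, an assumption for this sketch)
    weights = np.exp(-0.5 * misfit)
    weights /= weights.sum()

    # Weighted percentiles give the uncertainty range conditioned on dynamic data
    order = np.argsort(forecast)
    cdf = np.cumsum(weights[order])
    p10, p50, p90 = np.interp([0.1, 0.5, 0.9], cdf, forecast[order])
    print(f"P10={p10:.3e}  P50={p50:.3e}  P90={p90:.3e}")
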
2

Design and Evaluation of a Data-distributed Massively Parallel Implementation of a Global Optimization Algorithm---DIRECT

He, Jian 12 January 2008 (has links)
The present work aims at an efficient, portable, and robust design of a data-distributed massively parallel DIRECT, the deterministic global optimization algorithm widely used in multidisciplinary engineering design, biological science, and physical science applications. The original algorithm is modified to adapt to different problem scales and optimization (exploration vs. exploitation) goals. Enhanced with a memory reduction technique, dynamic data structures are used to organize local data, handle unpredictable memory requirements, reduce the memory usage, and share the data across multiple processors. The parallel scheme employs multilevel functional and data parallelism to boost concurrency and mitigate the data dependency, thus improving the load balancing and scalability. In addition, checkpointing features are integrated to provide fault tolerance and hot restarts. Important algorithm modifications and design considerations are discussed regarding data structures, parallel schemes, error handling, and portability. Using several benchmark functions and real-world applications, the present work is evaluated in terms of optimization effectiveness, data structure efficiency, memory usage, parallel performance, and checkpointing overhead. Modeling and analysis techniques are used to investigate the design effectiveness and performance sensitivity under various problem structures, parallel schemes, and system settings. Theoretical and experimental results are compared for two parallel clusters with different system scale and network connectivity. An analytical bounding model is constructed to measure the load balancing performance under different schemes. Additionally, linear regression models are used to characterize two major overhead sources, interprocessor communication and processor idleness, and are also applied to the isoefficiency functions in scalability analysis. For a variety of high-dimensional problems and large scale systems, the data-distributed massively parallel design has achieved reasonable performance. The results of the performance study provide guidance for efficient problem and scheme configuration. More importantly, the generalized design considerations and analysis techniques are beneficial for transforming many global search algorithms to become effective large scale parallel optimization tools. / Ph. D.
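
For readers unfamiliar with DIRECT, the much-simplified serial Python sketch below conveys only the box-trisection idea behind this style of global search. The one-dimensional objective, the fixed trade-off constant K, and the greedy single-box selection are all simplifying assumptions; real DIRECT selects every potentially optimal box per iteration, and none of the data-distributed parallel design described above is reflected here.

    import heapq

    def f(x):
        # illustrative 1-D objective, not from the thesis; global minimum near x = 0.65
        return (x - 0.7) ** 2 + 0.1 * abs(x - 0.2)

    def direct_like(lo=0.0, hi=1.0, iters=30, K=0.05):
        c = (lo + hi) / 2.0
        # heap entries: (score, center, half_width); the score trades off function
        # value against box size, so large unexplored boxes stay attractive
        heap = [(f(c) - K * (hi - lo), c, (hi - lo) / 2.0)]
        best = (f(c), c)
        for _ in range(iters):
            _, c, h = heapq.heappop(heap)                   # most promising box
            for cc in (c - 2 * h / 3, c, c + 2 * h / 3):    # trisect it
                fc = f(cc)
                best = min(best, (fc, cc))
                heapq.heappush(heap, (fc - K * (2 * h / 3), cc, h / 3))
        return best

    print(direct_like())   # best (value, point) found; the true minimizer is near x = 0.65
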
3

Řídké třídy grafů / Nowhere-dense classes of graphs

Tůma, Vojtěch January 2013 (has links)
In this thesis we study sparse classes of graphs and their properties usable for the design of algorithms and data structures. Our specific focus is on the concepts of bounded expansion and tree-depth, developed in recent years mainly by J. Nešetřil and P. Ossona de Mendez. We first give a brief introduction to the theory as a whole and survey tools and results from the related areas of parametrised complexity and algorithmic model theory. The main part of the thesis, the application of the theory, presents two new dynamic data structures. The first maintains a tree-depth decomposition of a graph; the second counts appearances of fixed subgraphs in a given graph. The time and space complexity of the operations of both structures is guaranteed to be low when they are used on sparse graphs.
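
The exponential-time Python sketch below spells out the recursive definition of tree-depth that the first data structure maintains dynamically. It is purely illustrative (brute force over all roots) and is not the decomposition algorithm from the thesis.

    def components(vertices, edges):
        vs, comps = set(vertices), []
        while vs:
            stack, comp = [next(iter(vs))], set()
            while stack:
                v = stack.pop()
                if v in comp:
                    continue
                comp.add(v)
                stack.extend(u for e in edges for u in e if v in e and u not in comp)
            vs -= comp
            comps.append(comp)
        return comps

    def treedepth(vertices, edges):
        vertices = set(vertices)
        if not vertices:
            return 0
        comps = components(vertices, edges)
        if len(comps) > 1:
            # disconnected graph: the deepest component decides
            return max(treedepth(c, {e for e in edges if e <= c}) for c in comps)
        # connected graph: try every vertex as the root to delete
        return 1 + min(
            treedepth(vertices - {v}, {e for e in edges if v not in e})
            for v in vertices
        )

    # A path on 4 vertices has tree-depth 3
    print(treedepth({1, 2, 3, 4},
                    {frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})}))
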
4

Caching dynamic data for web applications

Mahdavi, Mehregan, Computer Science & Engineering, Faculty of Engineering, UNSW January 2006 (has links)
Web portals are a rapidly growing class of applications, providing a single interface to access different sources (providers). The results from the providers are typically obtained by each provider querying a database and returning an HTML or XML document. Performance, and in particular fast response time, is one of the critical issues in such applications. User dissatisfaction increases dramatically with response time, resulting in abandonment of Web sites, which in turn can result in loss of revenue for the providers and the portal. Caching is one of the key techniques that address the performance of such applications. In this work we focus on improving the performance of portal applications via caching. We discuss the limitations of existing caching solutions in such applications and introduce a caching strategy based on collaboration between the portal and its providers. Providers trace their logs, extract information to identify good candidates for caching, and notify the portal. Caching at the portal is decided based on scores calculated by providers and associated with objects. We evaluate the performance of the collaborative caching strategy using simulation data. We show how providers can trace their logs, calculate cache-worthiness scores for their objects, and notify the portal. We also address the issue of heterogeneous scoring policies by different providers and introduce mechanisms to regulate caching scores. We also show how the portal and providers can synchronize their metadata in order to minimize the overhead associated with collaboration for caching.
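
A minimal, hypothetical sketch of the provider-side scoring step follows. The frequency-versus-update-rate formula is an illustrative assumption, not the exact cache-worthiness score defined in the thesis, and the log contents are made up.

    from collections import Counter

    access_log = ["q1", "q2", "q1", "q3", "q1", "q2"]   # object ids served to the portal
    update_log = ["q2", "q3", "q3"]                     # objects whose underlying data changed

    accesses, updates = Counter(access_log), Counter(update_log)

    def cache_worthiness(obj):
        # popular objects that rarely change make the best caching candidates
        return accesses[obj] / (1 + updates[obj])

    scores = {obj: cache_worthiness(obj) for obj in accesses}
    print(sorted(scores.items(), key=lambda kv: -kv[1]))
    # The provider would attach scores like these to its responses so the
    # portal can decide which objects are worth caching.
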
5

Design and Validation of Configurable Filter for JAS 39 Gripen Mission Planning Data

Flodin, Per January 2009 (has links)
Saab Aerosystems, a part of Saab AB, has overall responsibility for the development of the fourth-generation fighter aircraft JAS 39 Gripen. When planning a mission for one or more aircraft, a computer program called Mission Support System is used. Some of the data from the planning is then transferred to the actual aircraft. Today there are some unwanted restrictions in the planning software. One of these restrictions is that a number of parameters that control the output from a planned mission are not configurable at runtime, i.e. a reinstallation at the customer's location is needed to change them. The main purpose of this thesis was to propose a new design and a new framework that solve the inflexibility described above. The design was also to be validated by a test implementation. A number of different designs were proposed and four of these were selected as candidates for implementation. An important tool used when developing the designs was the theory of design patterns. To choose among the four, a ranking system based on both measurable metrics and non-measurable experience was used. One design was selected as the best, and after implementation it was considered valid. Future work can consist of rewriting all modules in the software to use the new framework.
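
As a hedged illustration of what "configurable at runtime" can look like, the sketch below uses a simple strategy/registry pattern in Python. All class names, rule names, and record fields are hypothetical and do not represent Saab's actual design or framework.

    from typing import Callable, Dict, List

    FilterFn = Callable[[dict], bool]

    class ConfigurableFilter:
        def __init__(self) -> None:
            self._rules: Dict[str, FilterFn] = {}

        def set_rule(self, name: str, rule: FilterFn) -> None:
            # new or changed rules can be loaded while the system is running
            self._rules[name] = rule

        def apply(self, records: List[dict]) -> List[dict]:
            return [r for r in records if all(rule(r) for rule in self._rules.values())]

    f = ConfigurableFilter()
    f.set_rule("max_altitude", lambda r: r["altitude"] <= 10000)
    print(f.apply([{"altitude": 9000}, {"altitude": 12000}]))   # -> [{'altitude': 9000}]
    # Changing the threshold is now a configuration update, not a reinstallation:
    f.set_rule("max_altitude", lambda r: r["altitude"] <= 15000)
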
6

Dynamic Data Extraction and Data Visualization with Application to the Kentucky Mesonet

Paidipally, Anoop Rao 01 May 2012 (has links)
There is a need to integrate large-scale databases, high-performance computing engines, and geographical information system technologies into a user-friendly web interface as a platform for data visualization and customized statistical analysis. We present some concepts and design ideas regarding dynamic data storage and extraction by making use of open-source computing and mapping technologies. We applied our methods to the Kentucky Mesonet automated weather mapping workflow. The main components of the workflow include a web-based interface and a robust database and computing infrastructure designed for both general users and power users such as modelers and researchers.
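
A minimal sketch of the extraction step, assuming a hypothetical observations table: a parameterized query pulls a station's recent records and returns JSON for a web front end to plot. Table, column, and station names are illustrative, not the Kentucky Mesonet schema.

    import json
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE obs (station TEXT, ts TEXT, temp_c REAL)")
    conn.executemany("INSERT INTO obs VALUES (?, ?, ?)",
                     [("STN1", "2012-05-01T00:00", 14.2),
                      ("STN1", "2012-05-01T01:00", 13.8)])

    def extract(station: str, limit: int = 100) -> str:
        rows = conn.execute(
            "SELECT ts, temp_c FROM obs WHERE station = ? ORDER BY ts DESC LIMIT ?",
            (station, limit)).fetchall()
        return json.dumps([{"time": ts, "temp_c": t} for ts, t in rows])

    print(extract("STN1"))
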
7

Integration of dynamic data into reservoir description using streamline approaches

He, Zhong 15 November 2004 (has links)
Integration of dynamic data is critical for reliable reservoir description and has been an outstanding challenge for the petroleum industry. This work develops practical dynamic data integration techniques using streamline approaches to condition static geological models to various kinds of dynamic data, including two-phase production history, interference pressure observations and primary production data. The proposed techniques are computationally efficient and robust, and thus well-suited for large-scale field applications. We can account for realistic field conditions, such as gravity, and changing field conditions, arising from infill drilling, pattern conversion, recompletion, etc., during the integration of two-phase production data. Our approach is fast and exhibits rapid convergence even when the initial model is far from the solution. The power and practical applicability of the proposed techniques are demonstrated with a variety of field examples. To integrate two-phase production data, a travel-time inversion analogous to seismic inversion is adopted. We extend the method via a 'generalized travel-time' inversion to ensure matching of the entire production response rather than just a single time point while retaining most of the quasi-linear property of travel-time inversion. To integrate the interference pressure data, we propose an alternating procedure of travel-time inversion and peak amplitude inversion or pressure inversion to improve the overall matching of the pressure response. A key component of the proposed techniques is the efficient computation of the sensitivities of dynamic responses with respect to reservoir parameters. These sensitivities are calculated analytically using a single forward simulation. Thus, our methods can be orders of magnitude faster than finite-difference based numerical approaches that require multiple forward simulations. The streamline approach has also been extended to identify reservoir compartmentalization and flow barriers using primary production data in conjunction with decline type-curve analysis. The streamline 'diffusive' time of flight provides an effective way to calculate the drainage volume in 3D heterogeneous reservoirs. The flow barriers and reservoir compartmentalization are inferred based on the matching of drainage volumes from streamline-based calculation and decline type-curve analysis. The proposed approach is well-suited for application in the early stages of field development with limited well data and has been illustrated using a field example from the Gulf of Mexico.
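
The travel-time idea can be illustrated with a small sketch: find the time shift that best aligns a simulated production response with the observed one. The sigmoidal water-cut curves below are synthetic stand-ins, not field data, and the brute-force shift search stands in for the sensitivity-based inversion used in the thesis.

    import numpy as np

    t = np.linspace(0, 10, 200)
    observed = 1.0 / (1.0 + np.exp(-(t - 6.0)))     # observed water cut (synthetic)
    simulated = 1.0 / (1.0 + np.exp(-(t - 4.5)))    # initial model's response (synthetic)

    def misfit(dt):
        shifted = np.interp(t, t + dt, simulated)   # simulated curve delayed by dt
        return np.sum((shifted - observed) ** 2)

    shifts = np.linspace(-3, 3, 121)
    best = min(shifts, key=misfit)
    print(f"best time shift: {best:.2f} (true offset is 1.5)")
    # A generalized travel-time inversion would perturb reservoir parameters to
    # drive this shift to zero, using sensitivities computed along streamlines.
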
8

INTEGRATED DECISION MAKING FOR PLANNING AND CONTROL OF DISTRIBUTED MANUFACTURING ENTERPRISES USING DYNAMIC-DATA-DRIVEN ADAPTIVE MULTI-SCALE SIMULATIONS (DDDAMS)

Celik, Nurcin January 2010 (has links)
Discrete-event simulation has become one of the most widely used analysis tools for large-scale, complex and dynamic systems such as supply chains, as it can take randomness into account and address very detailed models. However, there are major challenges faced in simulating such systems, especially when they are used to support short-term decisions (e.g., operational decisions or maintenance and scheduling decisions considered in this research). First, a detailed simulation requires significant amounts of computation time. Second, given the enormous amount of dynamically-changing data that exists in the system, information needs to be updated wisely in the model in order to prevent unnecessary usage of computing and networking resources. Third, there is a lack of methods allowing dynamic data updates during the simulation execution. Overall, in a simulation-based planning and control framework, timely monitoring, analysis, and control are important so as not to disrupt a dynamically changing system. To meet this temporal requirement and address the above-mentioned challenges, a Dynamic-Data-Driven Adaptive Multi-Scale Simulation (DDDAMS) paradigm is proposed to adaptively adjust the fidelity of a simulation model against available computational resources by incorporating dynamic data into the executing model, which then steers the measurement process for selective data update. To the best of our knowledge, the proposed DDDAMS methodology is one of the first efforts to present a coherent integrated decision making framework for timely planning and control of distributed manufacturing enterprises. To this end, a comprehensive system architecture and methodologies are first proposed, where the components include 1) real-time DDDAM-Simulation, 2) grid computing modules, 3) Web Service communication server, 4) database, 5) various sensors, and 6) the real system. Four algorithms are then developed and embedded into a real-time simulator to enable its DDDAMS capabilities such as abnormality detection, fidelity selection, fidelity assignment, and prediction and task generation. As part of the developed algorithms, improvements are made to the resampling techniques for sequential Bayesian inferencing, and their performance is benchmarked in terms of resampling quality and computational efficiency. Grid computing and Web Services are used for computational resources management and interoperable communications among distributed software components, respectively. A prototype of the proposed DDDAM-Simulation was successfully implemented for preventive maintenance scheduling and part routing scheduling in a semiconductor manufacturing supply chain, where the results look quite promising.
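
For context on the resampling step mentioned above, here is textbook systematic resampling for a particle filter as a small self-contained sketch; the thesis develops improved resampling variants that this example does not reproduce, and the weights are made up.

    import numpy as np

    def systematic_resample(weights, rng=np.random.default_rng(0)):
        n = len(weights)
        positions = (rng.random() + np.arange(n)) / n   # one uniform draw, stratified
        cumulative = np.cumsum(weights)
        cumulative[-1] = 1.0                            # guard against round-off
        return np.searchsorted(cumulative, positions)   # indices of surviving particles

    weights = np.array([0.05, 0.05, 0.6, 0.2, 0.1])
    print(systematic_resample(weights))                 # high-weight particles get duplicated
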
9

Adaptive Cryptographic Access Control for Dynamic Data Sharing Environments

Kayem, ANNE 21 October 2008 (has links)
Distributed systems, characterized by their ability to ensure the execution of multiple transactions across a myriad of applications, constitute a prime platform for building Web applications. However, Web application interactions raise issues pertaining to security and performance that make manual security management both time-consuming and challenging. This thesis is a testimony to the security and performance enhancements afforded by using the autonomic computing paradigm to design an adaptive cryptographic access control framework for dynamic data sharing environments. One of the methods of enforcing cryptographic access control in these environments is to classify users into one of several groups interconnected in the form of a partially ordered set. Each group is assigned a single cryptographic key that is used for encryption/decryption. Access to data is granted only if a user holds the "correct" key, or can derive the required key from the one in their possession. This approach to access control is a good example of one that provides good security but has the drawback of reacting to changes in group membership by replacing keys, and re-encrypting the associated data, throughout the entire hierarchy. Data re-encryption is time-consuming, so rekeying creates delays that impede performance. In order to support our argument in favor of adaptive security, we begin by presenting two cryptographic key management (CKM) schemes in which key updates affect only the class concerned or those in its sub-poset. These extensions enhance performance, but handling scenarios that require adaptability remains a challenge. Our framework addresses this issue by allowing the CKM scheme to monitor the rate at which key updates occur and to adjust resource (keys and encrypted data versions) allocations to handle future changes by anticipation rather than on demand. Therefore, in comparison to quasi-static approaches, the adaptive CKM scheme minimizes the long-term cost of key updates. Finally, since self-protecting CKM requires a lesser degree of physical intervention by a human security administrator, we consider the case of "collusion attacks" and propose two algorithms to detect as well as prevent such attacks. A complexity and security analysis shows the theoretical improvements our schemes offer. Each algorithm presented is supported by a proof-of-concept implementation and experimental results that show the performance improvements. / Thesis (Ph.D, Computing) -- Queen's University, 2008-10-16 16:19:46.617
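
One common way to realize key derivation along such a hierarchy is one-way, top-down hashing, sketched below with SHA-256. This illustrates only the general idea that a parent class can derive descendant keys (so a membership change re-keys only the affected sub-poset); it is not the specific CKM schemes proposed in the thesis, and the class labels and key values are illustrative.

    import hashlib

    def derive_key(parent_key: bytes, child_label: str) -> bytes:
        # one-way: holders of the child key cannot recover the parent key
        return hashlib.sha256(parent_key + child_label.encode()).digest()

    root_key = b"\x00" * 32                     # top security class (illustrative value)
    managers_key = derive_key(root_key, "managers")
    staff_key = derive_key(managers_key, "staff")

    # The root class can reproduce staff_key on demand; staff cannot go the other way.
    assert staff_key == derive_key(derive_key(root_key, "managers"), "staff")
    print(staff_key.hex()[:16], "...")
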
10

A Quadtree-based Adaptively-refined Cartesian-grid Algorithm For Solution Of The Euler Equations

Bulgok, Murat 01 October 2005 (has links) (PDF)
A Cartesian method for the solution of the steady two-dimensional Euler equations is developed. Dynamic data structures are used, and both geometric and solution-based adaptations are applied. Solution adaptation is achieved through solution-based gradient information. The finite volume method is used with a cell-centered approach. The solution is converged to a steady state by means of an approximate Riemann solver. A local time step is used for convergence acceleration. A multistage time stepping scheme is used to advance the solution in time. A number of internal and external flow problems are solved in order to demonstrate the efficiency and accuracy of the method.
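
A toy sketch of solution-based refinement on a quadtree: a cell is split into four children when a crude gradient estimate of an assumed scalar field exceeds a threshold. The field, threshold, and depth limit are illustrative; the thesis couples refinement to the Euler solution itself.

    import math

    def field(x, y):
        return math.tanh(20 * (x - 0.5))   # assumed scalar field, sharp feature near x = 0.5

    def refine(x, y, size, depth, max_depth=5, tol=0.5, cells=None):
        if cells is None:
            cells = []
        grad = abs(field(x + size, y) - field(x, y)) / size   # crude gradient estimate
        if depth < max_depth and grad * size > tol:
            h = size / 2
            for dx in (0, h):               # split the cell into four children
                for dy in (0, h):
                    refine(x + dx, y + dy, h, depth + 1, max_depth, tol, cells)
        else:
            cells.append((x, y, size))
        return cells

    leaves = refine(0.0, 0.0, 1.0, 0)
    print(len(leaves), "leaf cells; smallest size:", min(c[2] for c in leaves))
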
