51

Non-Adjoint Surfactant Flood Optimization of Net Present Value and Incorporation of Optimal Solution Under Geological and Economic Uncertainty

Odi, Uchenna O. 2009 December 1900 (has links)
The advent of smart well technology, the use of downhole sensors to adjust well controls (e.g. injection rate, bottomhole pressure), has made it possible to control a field at all stages of production. This possibility holds great promise for better managing enhanced oil recovery (EOR) processes, especially in terms of applying optimization techniques. However, some procedures for optimizing EOR processes are not based on the physics of the process, which may lead to erroneous results. In addition, optimization of EOR processes can be difficult, and limited, if there is no access to the simulator code for computing the adjoints used in optimization. This research describes the development of a general procedure for designing an initial starting point for a surfactant flood optimization. The method relies neither on a simulator's adjoint computation nor on external computation of adjoints. The reservoir simulator used for this research was Schlumberger's Eclipse 100, and optimization was carried out with a program written in Matlab. Utility of the approach is demonstrated by using it to optimize the net present value (NPV) of a 320-acre 5-spot surfactant flood and by incorporating the optimal solution into a probabilistic geological and economic setting. This thesis presents a general procedure for optimizing a surfactant flood and provides groundwork for optimizing other EOR techniques. The research is useful because it takes the optimal solution and calculates a probability of success over the possible NPVs. This is very important when assessing risk in a business scenario, because projects with an unknown probability of success are likely to be abandoned as uneconomic. The thesis also illustrates the range of possible NPVs if the optimal solution were used.
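The probability-of-success idea above can be sketched with a simple Monte Carlo NPV calculation. The distributions, prices, costs, and project life below are illustrative assumptions of mine, not values from the thesis:

```python
import random

def npv(cash_flows, rate):
    """Discount a list of yearly cash flows to present value."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def probability_of_success(n_trials=10_000, discount_rate=0.10, seed=42):
    """Sample uncertain oil price and recovery, return P(NPV > 0)."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_trials):
        oil_price = rng.gauss(70.0, 15.0)         # $/bbl, assumed distribution
        recovery = rng.gauss(50_000.0, 10_000.0)  # bbl/year, assumed distribution
        capex = 5_000_000.0                       # up-front surfactant cost, assumed
        yearly = oil_price * recovery
        flows = [-capex] + [yearly] * 10          # assumed 10-year project life
        if npv(flows, discount_rate) > 0:
            successes += 1
    return successes / n_trials
```

Sampling the geological and economic inputs yields an NPV distribution; the fraction of trials with NPV > 0 is the probability of success used to assess risk.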
52

Initial Member Selection and Covariance Localization Study of Ensemble Kalman Filter based Data Assimilation

Yip, Yeung 2011 May 1900 (has links)
Petroleum engineers generate reservoir simulation models to optimize production and maximize recovery. History matching is one of the methods used to calibrate such models. In traditional history matching, individual model parameters (permeability, relative permeability, initial water saturation, etc.) are adjusted until the production history is matched by the updated reservoir model. However, this approach, which relies on a single model, does not capture the full range of system uncertainty. Another drawback is that the entire model has to be re-matched from the initial time whenever new observation data are assimilated. The ensemble Kalman filter (EnKF) is a data assimilation technique that has gained increasing interest for petroleum history matching in recent years. The basic EnKF methodology consists of a forecast step and an update step. The method maintains a collection of state vectors, known as an ensemble, that are simulated forward in time; each ensemble member represents one reservoir model (realization). During the update step, the sample covariance is computed from the ensemble, and the state vectors are updated using formulations that involve this sample covariance. When a small ensemble is used for a large, field-scale model, a poor estimate of the covariance matrix can result (Anderson and Anderson 1999; Devegowda and Arroyo 2006). To mitigate this problem, various covariance conditioning schemes have been proposed to improve the performance of the EnKF without resorting to large ensembles that require enormous computational resources. In this study, we implemented the EnKF coupled with several covariance localization schemes (distance-based, streamline-trajectory-based, and streamline-sensitivity-based localization) as well as the hierarchical EnKF on a synthetic reservoir field case study. We describe the methodology of each covariance localization scheme along with its characteristics and limitations.
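A minimal sketch of one EnKF analysis step with a localization taper, for a single scalar observation. The scalar formulas, perturbed-observation variant, taper weights, and noise levels are illustrative choices of mine, not the thesis' implementation:

```python
import random

def enkf_update(ensemble, obs, obs_index, obs_var, taper, rng):
    """One EnKF analysis step for a single scalar observation.

    ensemble : list of state vectors (lists), one per member
    taper    : localization weights (1 near the observation, -> 0 far away)
    """
    n = len(ensemble)
    m = len(ensemble[0])
    mean = [sum(mem[i] for mem in ensemble) / n for i in range(m)]
    hx = [mem[obs_index] for mem in ensemble]        # each member's predicted obs
    hx_mean = sum(hx) / n
    # Sample covariance between each state element and the predicted observation
    cov_xh = [sum((mem[i] - mean[i]) * (h - hx_mean)
                  for mem, h in zip(ensemble, hx)) / (n - 1) for i in range(m)]
    var_h = sum((h - hx_mean) ** 2 for h in hx) / (n - 1)
    # Localized Kalman gain: taper suppresses spurious long-range correlations
    gain = [taper[i] * cov_xh[i] / (var_h + obs_var) for i in range(m)]
    updated = []
    for mem in ensemble:
        # Perturbed-observation innovation, one noise draw per member
        d = obs + rng.gauss(0.0, obs_var ** 0.5) - mem[obs_index]
        updated.append([mem[i] + gain[i] * d for i in range(m)])
    return updated
```

With a taper weight of 0 at a distant cell, that cell is left untouched, which is exactly how localization suppresses the spurious long-range sample correlations that small ensembles produce.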
53

Mining Mobile Groups from Uncertain Location Databases

Chen, Chih-Chi 21 July 2005 (has links)
As mobile communication devices become popular, obtaining the location data of various objects is more convenient than before. Mobile groups that exhibit spatial and temporal proximity can be used for marketing, criminal detection, and ecological studies, to name a few. Although the most advanced positioning equipment today can achieve high accuracy with measurement error of less than 10 meters, it is still expensive. Positioning equipment based on different technologies incurs different amounts of measurement error, ranging from 10 meters to a few hundred meters. In this thesis, we examine the impact of measurement errors on the accuracy of identified valid mobile groups and apply the Kalman filter and RTS smoothing as one-way and two-way corrections to the measured data. In most settings, the corrected location data yield more accurate valid mobile groups. However, when the measurement error is small and users do not change their speed abruptly, mining mobile groups directly on the measurement data yields better results.
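The one-way (Kalman filter) versus two-way (RTS smoother) correction can be illustrated with a scalar random-walk model; the noise parameters below are arbitrary illustrative values, not those of the thesis:

```python
def kalman_rts(measurements, q, r, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter (one-way) plus RTS smoother (two-way)."""
    xs, ps, xps, pps = [], [], [], []   # filtered / predicted means and variances
    x, p = x0, p0
    for z in measurements:
        xp, pp = x, p + q               # predict: uncertainty grows by q
        xps.append(xp); pps.append(pp)
        k = pp / (pp + r)               # update with measurement noise r
        x = xp + k * (z - xp)
        p = (1 - k) * pp
        xs.append(x); ps.append(p)
    # RTS backward sweep refines each estimate using future information
    sm = xs[:]
    for t in range(len(xs) - 2, -1, -1):
        c = ps[t] / pps[t + 1]
        sm[t] = xs[t] + c * (sm[t + 1] - xps[t + 1])
    return xs, sm
```

The smoother revisits each filtered estimate with information from later measurements, which is why the early estimates (where the filter has seen little data) improve the most.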
54

Applying Kalman Filter to Estimate the OTF of a Polluted Lens in an Image System

Chiu, Hung-chin 05 September 2005 (has links)
Lenses are important elements in optical imaging systems. However, lenses are liable to defects such as dust, which deteriorate their imaging quality. A polluted lens can be shown to be equivalent to a polluted random screen set against a clean lens. In our model, the defects on the random screen are assumed to be Poisson-distributed and overlapping, and the transmittance effect of each defect is multiplicative. In this thesis, we apply the Kalman filter to estimate the optical transfer function (OTF) of a defective imaging system. The experiments are set up with instruments including a video camera, a capture card, and a personal computer. The Kalman filter addresses an estimation problem defined by two models: the signal model and the observation model. It was originally developed in the field of optimal estimation for control and tracking applications, and has recently been applied often to problems of image restoration. In this thesis, the signal model is obtained from the ratio of the defected and clean pictures in the frequency domain, and the observation model is built for additive measurement noise from electronic sampling. Experimental results demonstrate that the estimated optical transfer function is useful for image restoration.
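At a single spatial frequency, estimating a static OTF value from repeated noisy ratio measurements reduces to a scalar recursive Kalman update; this toy per-bin estimator only illustrates the signal-model/observation-model split, and its initial values and noise variance are assumptions, not the thesis' actual filter:

```python
def estimate_otf_bin(ratios, obs_var, x0=1.0, p0=1.0):
    """Recursively estimate one static OTF value at a single spatial frequency.

    ratios : repeated noisy observations H_obs = G_defected / G_clean at that bin
    Returns the posterior mean and variance after processing all observations.
    """
    x, p = x0, p0
    for z in ratios:
        k = p / (p + obs_var)   # Kalman gain shrinks as confidence grows
        x += k * (z - x)        # correct the estimate toward the observation
        p *= (1 - k)            # posterior variance decreases monotonically
    return x, p
```

Running one such estimator per frequency bin yields an OTF estimate that can then be inverted (with regularization) for image restoration.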
55

The Use of Kalman Filter in Handling Imprecise and Missing Data for Mobile Group Mining

Hung, Tzu-yen 01 August 2006 (has links)
With the advances in communication techniques, services based on location information have come into existence. One such application is finding mobile groups that exhibit spatial and temporal proximity, called mobile group mining. Although there exist positioning devices capable of achieving high accuracy with low measurement error, consumer-grade, inexpensive positioning devices that incur various degrees of higher measurement error are much more popular. In addition, natural factors such as temperature, humidity, and pressure may influence the precision of position measurement. Worse, moving objects may sometimes become untraceable, voluntarily or involuntarily. In this thesis, we extend previous work on mobile group mining and adopt the Kalman filter to correct the noisy data and predict the missing data. We propose several methods based on the Kalman filter that correct or predict either the location data or the pair-wise distance data. These methods have been evaluated using synthetic data generated with the IBM City Simulator, and we identify the operating regions in which each method performs best.
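The correct/predict behavior for imprecise and missing samples can be sketched with a scalar random-walk Kalman filter in which a None reading triggers prediction only; the model and its parameters are illustrative assumptions, not the thesis' methods:

```python
def track_with_gaps(measurements, q, r, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter that tolerates missing samples.

    When a measurement is None, only the predict step runs, so the filter's
    prediction fills the gap (at the cost of growing uncertainty).
    """
    estimates = []
    x, p = x0, p0
    for z in measurements:
        p += q                  # predict: uncertainty grows each step
        if z is not None:       # update only when a reading actually arrived
            k = p / (p + r)
            x += k * (z - x)
            p *= (1 - k)
        estimates.append(x)
    return estimates
```

During a gap the estimate simply carries the last prediction forward, which is what lets group mining proceed over temporarily untraceable objects.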
56

Using Resampling to Optimize Continuous Queries in Wireless Sensor Networks

Liu, Pin-yu 17 July 2007 (has links)
The advances of communication and computer techniques have enabled the development of low-cost, low-power, multifunctional sensor nodes that are small in size and can communicate over short distances. A sensor network is composed of a large number of sensor nodes that are densely deployed either inside the phenomenon to be observed or very close to it. Sensor networks open up new opportunities to observe and interact with the physical world around us. Despite the recent advances in sensor network applications and technology, sensor networks still suffer from the major problem of limited energy, because most sensor nodes use batteries as their energy source, which are inconvenient and sometimes difficult to replace when they run out. Understanding the events, measures, and tasks required by certain applications has the potential to provide efficient communication techniques for the sensor network. Our focus in this work is on the efficient processing of continuous queries, for which query results have to be generated at the sampling rate specified by the user for an extended period of time. This thesis deals with two types of continuous queries: the first requires data from all sensor nodes, while the other is interested only in the data returned by some selected nodes. To answer these queries, data have to be sent to the base station at some designated rate, which may consume much energy. Previous works have developed two methods to reduce the energy consumption. Both are based on the error range the user can tolerate, which determines whether the current sensed data should be transmitted; the first uses a simple cache method, while the second uses a complex multi-dimensional model. However, these methods require the user to specify the error range, which may not be easy to do. In addition, the sensed data reported by the sensors were assumed to be accurate, which is by no means true in the real world. This thesis applies the Kalman filter to correct and predict sensed data. As a result, the sampling frequency of each sensor is dynamically adjusted, a process referred to as resampling, which systematically determines the data sensing/transfer rate of the sensors. We evaluate the proposed methods using empirical data collected from a real sensor network.
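One way to realize the resampling idea, sketched under assumptions of my own (a scalar random-walk model and a fixed confidence band, not the thesis' scheme), is to let a node transmit only when its reading escapes the filter's prediction band:

```python
def resample(readings, q, r, n_sigma=2.0, x0=None, p0=1.0):
    """Prediction-driven resampling for one sensor.

    The node transmits a reading only when it falls outside the filter's
    n_sigma prediction band; otherwise the base station keeps using the
    predicted value. Returns the indices of transmitted readings.
    """
    x = readings[0] if x0 is None else x0
    p = p0
    transmitted = []
    for t, z in enumerate(readings):
        p += q                              # predict: uncertainty grows
        band = n_sigma * (p + r) ** 0.5
        if abs(z - x) > band:               # prediction failed: transmit
            transmitted.append(t)
            k = p / (p + r)
            x += k * (z - x)                # correct with the transmitted value
            p *= (1 - k)
        # otherwise the prediction stands in for the reading
    return transmitted
```

Steady readings produce no traffic at all, while a sudden change (e.g. a temperature jump) triggers a single transmission, after which the filter locks on again.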
57

Improvements to a queue and delay estimation algorithm utilized in video imaging vehicle detection systems

Cheek, Marshall Tyler 17 September 2007 (has links)
Video Imaging Vehicle Detection Systems (VIVDS) are steadily becoming the dominant method for the detection of vehicles at a signalized traffic approach. This research investigates improvements to a queue and delay estimation algorithm (QDA), specifically the detection of queued vehicles during the red phase of a signal cycle. A previous version of the QDA used a weighted-average technique that combined previous estimates of queue length with current measurements to produce a current estimate. This method required some effort to calibrate and produced a bias that made the estimated queue lengths consistently shorter than the baseline (actual) queue lengths. The researcher's goal was to produce a method of queue estimation during the red phase that minimized this bias and required less calibration, yet still produced an accurate estimate of queue length. This estimate is essential because many other calculations in the QDA depend on queue growth and length trends during red. The results show that a linear regression method, which uses previous queue measurements to establish a queue growth rate, combined with a Kalman filter for minimizing error and controlling queue growth, produced the most accurate queue estimates among the new methods attempted. This method outperformed the weighted-average technique of the previous QDA during the calibration tests, and again during the validation tests. This conclusion was supported by a statistical analysis of the data and by predicted-vs.-actual queue plots: such a plot indicated that the linear regression method with a Kalman filter was capable of describing 85 percent of the variance in the observed queue length data. The researcher recommends implementing the linear regression method with a Kalman filter, because it requires little calibration while providing an adaptive queue estimation method that has proven to be accurate.
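The combination described above, a regression-based growth rate corrected by a scalar Kalman gain, might look like the following sketch; the seed-window size, unit time steps, and noise settings are illustrative assumptions, not the calibrated QDA values:

```python
def fit_growth(times, queues):
    """Ordinary least-squares slope and intercept for queue length vs. time."""
    n = len(times)
    mt = sum(times) / n
    mq = sum(queues) / n
    slope = (sum((t - mt) * (q - mq) for t, q in zip(times, queues))
             / sum((t - mt) ** 2 for t in times))
    return slope, mq - slope * mt

def estimate_queue(times, queues, p=1.0, q_noise=0.5, r=2.0):
    """Predict queue length along the fitted growth line, then correct each
    prediction with the incoming measurement through a scalar Kalman gain.

    Assumes unit time steps after the three-sample seed window.
    """
    slope, intercept = fit_growth(times[:3], queues[:3])  # seed the growth rate
    x = intercept + slope * times[2]
    estimates = [x]
    for z in queues[3:]:
        x += slope              # predict one step along the growth line
        p += q_noise
        k = p / (p + r)
        x += k * (z - x)        # correct with the measured queue length
        p *= (1 - k)
        estimates.append(x)
    return estimates
```

The regression supplies the queue-growth trend during red, and the Kalman gain keeps the estimate from drifting away from the measured lengths.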
58

Dual-IMM System for Target Tracking and Data Fusion

Shiu, Jia-yu 30 August 2009 (has links)
In solving target tracking problems, the Kalman filter (KF) is one of the most widely used estimators. Whether the state of target movement adapts to changes in the observations depends on the model assumptions. The interacting multiple model (IMM) algorithm uses the interaction of a bank of parallel Kalman filters to handle the hypothesized models of a maneuvering target. Through soft switching, the IMM algorithm, with parallel Kalman filters of different dimensions, can perform well by adjusting the model weights. Nonetheless, uncertainty in the measured data and the types of sensing systems used for target tracking may still hinder signal processing in the IMM. To improve the performance of target tracking and signal estimation, the concept of data fusion can be adopted in IMM-based structures, with multiple IMM-based estimators used in a multi-sensor data fusion structure. In this thesis, we propose a dual-IMM estimator structure in which data fusion between the two IMM estimators is achieved by updating the associated model probabilities. Supposing the two sensors measuring the moving target are affected by different degrees of noise, the measured data are first processed by two separate IMM estimators. The IMM-based estimators then exchange their estimates, model probabilities, and model transition probabilities, and the shared data are integrated according to the proposed dual-IMM algorithm. The dual-IMM estimator can be used to avoid the degraded performance of a single IMM with insufficient data or undesirable environmental effects. The simulation results show that a single IMM estimator with a smaller measurement noise level can compensate for the other IMM estimator, which is affected by larger measurement noise, and improved overall performance is obtained from the dual-IMM estimator. Generally speaking, the two IMM estimators in the proposed structure achieve better performance when the same level of measurement noise is assumed. The proposed dual-IMM structure can easily be extended to a multiple-IMM structure for estimation and data fusion.
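The model-probability update that drives the IMM's soft switching (the quantity the dual-IMM structure exchanges between estimators) can be sketched for a bank of scalar filters as follows; the transition matrix and noise figures in the example are illustrative assumptions:

```python
import math

def imm_weights(mu, trans, innovations, innov_vars):
    """One IMM model-probability update for a bank of scalar Kalman filters.

    mu          : current model probabilities
    trans       : trans[i][j] = P(switch from model i to model j)
    innovations : measurement residual of each model's Kalman filter
    innov_vars  : innovation variance of each model
    """
    n = len(mu)
    # Predicted model probabilities under the Markov switching model
    c = [sum(trans[i][j] * mu[i] for i in range(n)) for j in range(n)]
    # Gaussian likelihood of the current measurement under each model
    lik = [math.exp(-0.5 * innovations[j] ** 2 / innov_vars[j])
           / math.sqrt(2 * math.pi * innov_vars[j]) for j in range(n)]
    post = [c[j] * lik[j] for j in range(n)]
    s = sum(post)
    return [w / s for w in post]   # normalized weights: the "soft switching"
```

A model whose filter produces small innovations (i.e. whose motion hypothesis fits the data) rapidly accumulates weight, which is how the bank adapts to a maneuvering target without a hard model decision.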
59

Implementation of a Mono-Camera SLAM Method for Visually Aided Navigation and Control of an Autonomous Airship

Lange, Sven 21 February 2008 (has links) (PDF)
Camera-based methods for controlling autonomous mobile robots have become increasingly popular in recent years. This work investigates the use of a stereo camera system and a mono-camera SLAM method to support the navigation of an autonomous airship. Using sensor data from an IMU, GPS, and camera, a position estimate is computed through sensor fusion with the Extended and the Unscented Kalman filter.
60

Application of the Kalman Filter for Complexity Reduction in Controlling

Strukov, Urs. January 2001 (has links)
Thesis (doctoral)--Universität St. Gallen, 2001.
