About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
231

Adaptive estimation and control algorithms for certain classes of large-scale sensor and actuator uncertainties

Mercker, Travis H. 29 June 2012
This dissertation considers the general problem of controlling dynamic systems subject to large-scale sensor and actuator uncertainties. The assumption is made that the uncertainty is limited to either a pure rotation (i.e., a special orthogonal matrix) or to independent rotations of each axis. Although uncertainty can appear in more general forms, this representation captures the "net effect" of misaligned ideal axes, which is of fundamental importance to the control of numerous systems. Adaptive observers and controllers are introduced that guarantee perfect reference trajectory tracking even in the presence of these large-scale uncertainties. The specific contributions of this dissertation are as follows. (I) The problem of rigid-body attitude tracking with vector measurements, unknown gyro bias, and unknown body inertia matrix is addressed for the first time. In this problem, the body attitude acts as an unknown special orthogonal matrix (i.e., sensor uncertainty). A set of adaptive observers and an adaptive controller are presented that guarantee perfect tracking as well as convergence of the attitude and bias estimates through a Lyapunov stability analysis. (II) An adaptive observer is developed for the scenario where the control is pre-multiplied by an unknown constant scaling and rotation matrix, which gives a non-affine representation of the uncertainty. The observer is shown to be convergent given a certain persistence-of-excitation condition on the input signal and using a smooth projection scheme on the estimate of the unknown scaling. In addition, the observer is combined with a stabilizing control to guarantee perfect tracking, which establishes a separation-like property. (III) The class of uncertainties where each axis of the control is independently misaligned is examined. The problem is split into studies of in-plane and out-of-plane misalignment angles, given that they exhibit fundamental technical differences in establishing convergence. Where possible, rigorous stability proofs are given for a series of adaptive observers. The structure of the observers assures that the estimates do not introduce any singularities into the control problem other than those inherent in the misalignment geometry. The inherent singularities are avoided through the use of projection schemes, which allows for extension to the control problem. This work represents the first significant effort to develop adaptive observers and controllers for this class of misalignments.
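To make the sensor-uncertainty model concrete: the sketch below, which is illustrative and not the dissertation's adaptive observer, recovers an unknown special orthogonal matrix from noisy vector measurements by solving the classical least-squares attitude (Wahba) problem with an SVD. All names and values here are hypothetical.

```python
import numpy as np

def random_rotation(rng):
    # Draw a random special orthogonal matrix via QR decomposition.
    A = rng.standard_normal((3, 3))
    Q, R = np.linalg.qr(A)
    Q = Q * np.sign(np.diag(R))   # fix column signs for a unique factorization
    if np.linalg.det(Q) < 0:      # enforce det = +1 (rotation, not reflection)
        Q[:, 2] = -Q[:, 2]
    return Q

def estimate_attitude(refs, meas):
    """Least-squares attitude from vector pairs (Wahba's problem via SVD).

    refs, meas: (n, 3) arrays with meas[i] ~= R @ refs[i] for an unknown R.
    """
    H = meas.T @ refs
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # project onto SO(3)
    return U @ D @ Vt

rng = np.random.default_rng(0)
R_true = random_rotation(rng)        # the unknown rotation (sensor uncertainty)
refs = rng.standard_normal((5, 3))   # known reference directions
meas = refs @ R_true.T + 0.01 * rng.standard_normal((5, 3))  # noisy measurements
R_hat = estimate_attitude(refs, meas)
print("attitude estimation error:", np.linalg.norm(R_hat - R_true))
```

The SVD projection plays the same role as the projection schemes mentioned in the abstract: it keeps the estimate on the rotation group so no artificial singularities are introduced.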
232

Tax uncertainty and real investment decisions : evidence from mergers and acquisitions

Stomberg, Bridget Marie 05 November 2013
This study uses corporate takeovers as a setting to examine how tax uncertainty affects managers' real investment decisions. Specifically, I investigate whether uncertainty about target firms' income taxes influences takeover premiums. Drawing on theories from finance, I predict that tax uncertainty leads to increased divergence of opinion among target shareholders about target value, which in turn leads to higher takeover premiums. I also predict a positive direct association between measures of target tax uncertainty and takeover premiums because investments with tax uncertainty provide flexibility in reporting book income that bidding managers value. Consistent with both predictions, I find a positive association between divergence of target shareholder opinion about taxes and takeover premiums as well as a positive association between target tax uncertainty and takeover premiums. The association between tax uncertainty and premiums is more positive when the acquiring firm faces greater capital market pressures. Finally, all positive associations persist in recent years despite newly required financial statement disclosures of tax uncertainty.
233

Error analysis for radiation transport

Tencer, John Thomas 18 February 2014
All relevant sources of error in the numerical solution of the radiative transport equation are considered. Common spatial discretization methods are discussed for completeness; the application of these methods to the radiative transport equation is not substantially different from that for any other partial differential equation. Several of the most prevalent angular approximations within the heat transfer community are implemented and compared. Three model problems are proposed, and the relative accuracy of each of the angular approximations is assessed for a range of optical thickness and scattering albedo. The model problems represent a range of application spaces. The quantified comparison of these approximations on the basis of accuracy over such a wide parameter space is one of the contributions of this work. The major original contribution of this work involves the treatment of errors associated with the energy dependence of intensity. The full spectrum correlated-k distribution (FSK) method has received recent attention as a good compromise between computational expense and accuracy. Two approaches are taken toward quantifying the error associated with the FSK method. The Multi-Source Full Spectrum k-Distribution (MSFSK) method makes use of the convenient property that the FSK method is exact for homogeneous media. It involves a line-by-line solution on a coarse grid and a number of k-distribution solutions on subdomains to effectively increase the grid resolution. This yields highly accurate solutions on fine grids and a known rate of convergence as the number of subdomains increases. The stochastic full spectrum k-distribution (SFSK) method is a more general approach to estimating the error in k-distribution solutions. The FSK method relies on a spectral reordering and scaling which greatly simplify the spectral dependence of the absorption coefficient. This reordering is not necessarily consistent across the entire domain, which results in errors. The SFSK method involves treating the absorption line blackbody distribution function not as deterministic but rather as a stochastic process. The mean, covariance, and correlation structure are all fit empirically to data from a high-resolution spectral database. The standard deviation of the heat flux prediction is found to be a good error estimator for the k-distribution method.
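In the spirit of the SFSK idea, treating an uncertain spectral quantity as a stochastic process and using the spread of the resulting flux as an error estimator, the toy sketch below samples correlated realizations of an absorption coefficient and propagates each through the Beer-Lambert law for a slab. It is a generic illustration, not the author's method; every value is assumed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Path discretization through a slab (hypothetical values).
n, L = 100, 1.0
ds = L / n
kappa_mean = 2.0 * np.ones(n)       # mean absorption coefficient [1/m]

# Squared-exponential covariance stands in for the reordering inconsistency.
x = np.linspace(0.0, L, n)
sigma, ell = 0.2, 0.1
C = sigma**2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)
Lc = np.linalg.cholesky(C + 1e-10 * np.eye(n))

# Sample absorption realizations and propagate to slab transmissivity.
samples = 2000
taus = np.empty(samples)
for k in range(samples):
    kappa = kappa_mean + Lc @ rng.standard_normal(n)
    taus[k] = np.exp(-np.sum(kappa * ds))   # Beer-Lambert transmission

# The standard deviation serves as the error estimate, as in the abstract.
print(f"transmissivity: {taus.mean():.4f} +/- {taus.std():.4f}")
```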
234

Impact of budget uncertainty on network-level pavement condition : a robust optimization approach

Al-Amin, Md 04 April 2014
Highway agencies usually face budget uncertainty for pavement maintenance and rehabilitation activities due to limitations in resources and changes in government policies. Agencies commonly plan maintenance for the pavement network based on the nominal available budget, without taking budget variability into consideration. A maintenance program based on a deterministic budget results in suboptimal maintenance decisions that degrade overall network conditions if the budget falls short in some future year of the planning horizon. It is therefore important for highway agencies to adopt maintenance and rehabilitation policies that are protected against uncertainty in the maintenance and rehabilitation budget. In this study, a multi-period linear integer programming model and its robust counterpart are proposed to account for an uncertain maintenance and rehabilitation budget. The proposed model provides a maintenance and rehabilitation program for the pavement network that minimizes the impact of budget variability on network conditions. A case study was carried out for a network of ten pavement sections, and the solution of the robust optimization model was compared to that of the deterministic model. The results show that robust optimization is an attractive method for minimizing the effect of budget uncertainty on pavement conditions at the network level.
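The contrast between nominal and robust planning can be shown with a toy single-period version of the problem. The sketch below uses entirely hypothetical costs and benefits and brute-force enumeration in place of an integer-programming solver; the robust counterpart simply requires feasibility under the worst-case budget in the uncertainty set.

```python
from itertools import product

# Hypothetical data for ten pavement sections: treatment cost and the
# condition benefit gained if the section is treated this period.
cost    = [12, 8, 15, 6, 9, 11, 7, 14, 10, 5]
benefit = [30, 18, 40, 12, 20, 26, 15, 35, 22, 9]

nominal_budget = 60
worst_case_budget = 48   # assumed worst case, e.g. a 20 percent shortfall

def best_plan(budget):
    """Enumerate all 2^10 treatment plans and keep the best feasible one."""
    best_val, best_x = -1, None
    for x in product((0, 1), repeat=len(cost)):
        c = sum(ci * xi for ci, xi in zip(cost, x))
        if c <= budget:
            v = sum(bi * xi for bi, xi in zip(benefit, x))
            if v > best_val:
                best_val, best_x = v, x
    return best_val, best_x

nom_val, nom_x = best_plan(nominal_budget)
rob_val, rob_x = best_plan(worst_case_budget)   # robust counterpart
print("nominal plan benefit:", nom_val, nom_x)
print("robust  plan benefit:", rob_val, rob_x)
# The robust plan gives up some nominal benefit but remains feasible
# even if the budget falls short.
```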
235

Evaluating nearest neighbor queries over uncertain databases

Xie, Xike (谢希科). January 2012
Nearest Neighbor (NN) queries are important in emerging applications, such as wireless networks, location-based services, and data stream applications, where the data obtained are often imprecise. The imprecision or imperfection of the data sources is modeled as uncertain data in recent research works. Handling uncertainty is important because this issue affects the quality of query answers. Although queries on uncertain data are useful, evaluating them can be costly in terms of I/O or computational efficiency. In this thesis, we study how to efficiently evaluate NN queries on uncertain data. Given a query point q and a set of uncertain objects O, the possible nearest neighbor query returns the set of candidates that have non-zero probability of being the query answer. It is also interesting to ask "which region has the same set of possible nearest neighbors" and "which region has one specific object as its possible nearest neighbor". To reveal the relationship between the query space and nearest neighbor answers, we propose the UV-diagram, in which the query space is split into disjoint partitions such that each partition is associated with a set of objects. If a query point is located inside a partition, its possible nearest neighbors can be directly retrieved. However, the number of such partitions is exponential and the construction effort can be expensive. To tackle this problem, we propose an alternative concept, called the UV-cell, and efficient algorithms for constructing it. The UV-cell has an irregular shape, which incurs difficulties in storage, maintenance, and query evaluation. We design an index structure, called the UV-index, which is an approximated version of the UV-diagram. Extensive experiments show that the UV-index can efficiently answer different variants of NN queries, such as probabilistic nearest neighbor queries and continuous probabilistic nearest neighbor queries. Another problem studied in this thesis is the trajectory nearest neighbor query, in which the query point is restricted to a pre-known trajectory. In applications such as monitoring potential threats along a flight's or vessel's trajectory, it is useful to derive nearest neighbors for all points on the query trajectory. Simple solutions, such as sampling or approximating the locations of uncertain objects as points, fail to achieve good query quality. To handle this problem, we design efficient algorithms and optimization methods for this query. Experiments show that our solution can efficiently and accurately answer this query and is scalable to large datasets and long trajectories.
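The basic candidate test behind possible-NN queries can be sketched with distance bounds: an uncertain object can be a possible nearest neighbor only if its minimum possible distance to the query does not exceed the smallest maximum distance over all objects. The sketch below, assuming circular uncertainty regions and hypothetical data, illustrates that pruning rule; it is a simplification, not the thesis's UV-diagram machinery.

```python
import math

def possible_nearest_neighbors(q, objects):
    """Candidate set for a possible-NN query over uncertain objects.

    Each object is a disk (cx, cy, r) bounding its uncertainty region.
    An object is a candidate iff its minimum possible distance to q does
    not exceed the smallest maximum distance over all objects.
    """
    def bounds(o):
        d = math.hypot(q[0] - o[0], q[1] - o[1])
        return max(d - o[2], 0.0), d + o[2]   # (mindist, maxdist)

    bb = [bounds(o) for o in objects]
    best_max = min(mx for _, mx in bb)        # pruning threshold
    return [o for o, (mn, _) in zip(objects, bb) if mn <= best_max]

objects = [(1.0, 1.0, 0.5), (2.0, 2.0, 0.2), (5.0, 0.0, 1.0)]
print(possible_nearest_neighbors((0.0, 0.0), objects))
# Only the first two disks survive: the third can never be closest.
```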
236

Uncertainty in proved reserves estimation by decline curve analysis

Apiwatcharoenkul, Woravut 03 February 2015
Proved reserves estimation is a crucial process since it impacts many aspects of the petroleum business. By the definition of the Society of Petroleum Engineers, proved reserves must be estimated by reliable methods such that there is at least a 90 percent probability (P90) that the actual quantities recovered will equal or exceed the estimates. Decline curve analysis (DCA) is a commonly used method in which a trend is fitted to a production history and extrapolated to an economic limit for the reserves estimation. The trend is the "best estimate" line that represents the well performance, which corresponds to the 50th percentile value (P50). This practice therefore conflicts with the proved reserves definition. An exponential decline model is used as a base case because it forms a straight line in rate-cumulative coordinates. Two straight-line fitting methods, ordinary least squares and errors-in-variables, are compared. The least squares method works better, and its result is consistent with the Gauss-Markov theorem. In compliance with the definition, the proved reserves can be estimated by determining the 90th percentile value (in descending order) of the distribution implied by the variance. A conventional estimation using the principle of confidence intervals is first introduced to quantify the spread, the difference between P50 and P90, from the variability of cumulative production. Because the conventional method overestimates the spread, an analytical formula is derived for estimating the variance of the cumulative production. The formula comes from integrating the production rate over a period of time together with an error model. The variance estimates agree with Monte Carlo simulation (MCS) results. The variance is then used to quantify the spread under the assumption that the ultimate cumulative production is normally distributed. Hyperbolic and harmonic models are also studied; the spread discrepancy between the analytics and the MCS is acceptable. However, the results depend on the accuracy of the decline model and the error model used: if the decline behavior changes during the estimation period, the estimated spread will be inaccurate. In sensitivity analysis, the trend of the spread mirrors how uncertainty changes as a parameter changes; for instance, the spread shrinks if uncertainty shrinks with the changing parameter, and vice versa. The field application of the analytical solution is consistent with the assumed model. The spread depends on how much uncertainty is in the data: the more uncertainty assumed in the data, the larger the spread.
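A minimal numerical sketch of this workflow might look like the following, with hypothetical data, ordinary least squares in rate-cumulative coordinates, and a normality assumption for the ultimate recovery. The variance propagation here is a crude first-order simplification (slope uncertainty is neglected), not the dissertation's analytical formula.

```python
import numpy as np

# Hypothetical production history: exponential decline is a straight line
# in rate-cumulative coordinates, q = qi - D * Np.
rng = np.random.default_rng(2)
Np = np.linspace(0, 5e5, 36)                  # cumulative production, STB
q_true = 1000.0 - 1.5e-3 * Np                 # true decline trend
q = q_true + rng.normal(0.0, 20.0, Np.size)   # measured rates with noise

# Ordinary least squares fit of the trend (the P50 "best estimate" line).
A = np.column_stack([np.ones_like(Np), Np])
(qi, slope), res, *_ = np.linalg.lstsq(A, q, rcond=None)
D = -slope

q_ec = 100.0                                  # economic-limit rate
eur_p50 = (qi - q_ec) / D                     # extrapolated ultimate recovery

# Propagate the fit variance to the ultimate recovery and take the value
# exceeded with 90 percent probability, assuming normality.
sigma_q = np.sqrt(res[0] / (Np.size - 2))     # residual std of the rate fit
sigma_eur = sigma_q / D                       # simplified propagation
eur_p90 = eur_p50 - 1.2816 * sigma_eur        # z-score for the 10th percentile
print(f"P50 EUR = {eur_p50:,.0f} STB, P90 EUR = {eur_p90:,.0f} STB")
```

The gap between `eur_p50` and `eur_p90` is the "spread" the abstract refers to: it grows with the assumed data uncertainty, exactly as described.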
237

On goal-oriented error estimation and adaptivity for nonlinear systems with uncertain data and application to flow problems

Bryant, Corey Michael 09 February 2015
The objective of this work is to develop a posteriori error estimates and adaptive strategies for the numerical solution of nonlinear systems of partial differential equations with uncertain data. Areas of application cover problems in fluid mechanics, including a Bayesian model selection study of turbulence comparing different uncertainty models. Accounting for uncertainties in model parameters may significantly increase the computational time when simulating complex problems. The premise is that using error estimates and adaptively refining the solution process can reduce the cost of such simulations while preserving their accuracy within some tolerance. New insights for goal-oriented error estimation for deterministic nonlinear problems are first presented. Linearization of the adjoint problems and quantities of interest introduces higher-order terms in the error representation that are generally neglected; their effects on goal-oriented adaptive strategies are investigated in detail here. Contributions on that subject include extensions of well-known theoretical results for linear problems to the nonlinear setting, computational studies in support of these results, and an extensive comparative study of goal-oriented adaptive schemes that do, and do not, include the higher-order terms. Approaches for goal-oriented error estimation for PDEs with uncertain coefficients have already been presented but lack the capability to distinguish between the different sources of error. A novel approach is proposed here that decomposes the error estimate into contributions from the physical discretization and the uncertainty approximation. Theoretical bounds are proven and numerical examples are presented to verify that the approach identifies the predominant source of the error in a surrogate model. Adaptive strategies that use this error decomposition and refine the approximation space accordingly are designed and tested. All methodologies are demonstrated on benchmark flow problems: the Stokes lid-driven cavity, the 1D Burgers' equation, and 2D incompressible flows at low Reynolds numbers. The procedure is also applied to an uncertainty quantification study of RANS turbulence models in channel flows. Adaptive surrogate models are constructed to make parameter uncertainty propagation more efficient. Using surrogate models and adaptivity in a Bayesian model selection procedure, it is shown that significant computational savings can be gained over the full RANS model while maintaining similar accuracy in the predictions.
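For a linear problem and a linear quantity of interest, the adjoint-weighted residual reproduces the QoI error exactly; the higher-order terms discussed above arise only when the problem or the QoI is nonlinear and must be linearized. The sketch below is a hypothetical 1D Poisson example of the dual-weighted-residual identity on nested finite-difference grids, not the methodology of the dissertation.

```python
import numpy as np

def poisson_solve(n):
    """Solve -u'' = 1 on (0,1), u(0)=u(1)=0, with n interior FD points."""
    h = 1.0 / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    f = np.ones(n)
    return A, f, np.linalg.solve(A, f)

# Quantity of interest: the mean of the solution, J(u) = g @ u.
n_coarse, n_fine = 7, 15            # the fine grid nests the coarse one
A_H, f_H, u_H = poisson_solve(n_coarse)
A_h, f_h, u_h = poisson_solve(n_fine)

# Inject the coarse solution onto the fine grid (linear interpolation).
x_H = np.linspace(0, 1, n_coarse + 2)[1:-1]
x_h = np.linspace(0, 1, n_fine + 2)[1:-1]
u_Hh = np.interp(x_h, x_H, u_H)

# Dual-weighted residual: J(u_h) - J(u_Hh) = z @ (f_h - A_h @ u_Hh),
# where z solves the adjoint problem A^T z = dJ/du.
g = np.full(n_fine, 1.0 / n_fine)
z = np.linalg.solve(A_h.T, g)
eta = z @ (f_h - A_h @ u_Hh)

true_err = g @ u_h - g @ u_Hh
print(f"DWR estimate: {eta:.3e}, true QoI error: {true_err:.3e}")
```

Here estimate and truth coincide to machine precision; in the nonlinear setting the linearized adjoint makes the estimate only first-order accurate, which is what motivates studying the neglected higher-order terms.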
238

The Design of Dynamic Calibration Procedure

Leite, Nelson Paiva Oliveira; Sousa, Lucas Benedito dos Reis. October 2012
ITC/USA 2012 Conference Proceedings / The Forty-Eighth Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2012 / Town and Country Resort & Convention Center, San Diego, California / The execution of an experimental Flight Test Campaign (FTC) provides all the information required for aircraft operation and certification. Nowadays, all information gathered during an FTC is provided by the Flight Test Instrumentation System (FTI), which is essentially a measurement system. Typically, for all FTI parameters, the calibration coefficients that minimize most systematic errors, along with their associated uncertainty, are estimated by a static calibration process. To execute this task, the Brazilian Institute of Research and Flight Test (Instituto de Pesquisa e Ensaios em Voo - IPEV) developed the Sistema de Automação do Laboratório de Ensaios em Voo (SALEV©), which is fully compliant with the calibration and uncertainty-expression standards. For some parameters (e.g. static pressure), sensor installation particularities (e.g. pressure tapping) introduce low-pass filtering characteristics into the measurement chain. In this case, measurement accuracy is jeopardized when executing high-dynamic test points (e.g. spin tests). To overcome this issue, the IPEV research and development group introduced a dynamic calibration process for flight test parameters that requires knowledge of the actual transfer function (TF). The difficulty is that generating the impulsive input needed for TF characterization is too complex in practice. To solve this, a new calibration procedure was developed and evaluated for determining the FTI dynamic response: SALEV© was used to apply a step input instead of an impulse, and filtered and unfiltered data were then compared to determine the TF. Preliminary test results show satisfactory performance.
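The step-for-impulse substitution works because, for a linear system, the impulse response is the time derivative of the step response, so a step test carries the same TF information. The sketch below, with hypothetical values, estimates the time constant of an assumed first-order low-pass pressure-tap model from a noisy step response; it is a generic identification example, not the SALEV© procedure.

```python
import numpy as np

# A pressure tap modeled as a first-order low-pass: G(s) = 1 / (tau*s + 1).
# Its step response is y(t) = 1 - exp(-t/tau).
rng = np.random.default_rng(3)
tau_true = 0.15                      # hypothetical time constant [s]
t = np.linspace(0.0, 1.0, 501)
y = 1.0 - np.exp(-t / tau_true) + rng.normal(0.0, 0.005, t.size)

# log(1 - y) = -t/tau is linear in t: estimate tau by least squares,
# using only the part of the response where 1 - y is well above the noise.
mask = (t > 0) & (y < 0.95)
slope = np.polyfit(t[mask], np.log(1.0 - y[mask]), 1)[0]
tau_hat = -1.0 / slope
print(f"estimated time constant: {tau_hat*1e3:.1f} ms "
      f"(true {tau_true*1e3:.0f} ms)")
```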
239

Modeling and optimization for disruption management

Qi, Xiangtong, 1970- 13 July 2011
Not available
240

Three Perspectives on the Worth of Hydrologic Data

Kikuchi, Colin P. January 2015
Data collection is an integral part of hydrologic investigations, yet it is costly, particularly in subsurface environments. Consequently, it is critical to target data collection efforts toward prospective datasets that will best address the questions at hand in the context of the study. Experimental and monitoring network designs that have been carefully planned with a specific objective in mind are likely to yield information-rich data that can address critical questions of concern. Conversely, data collection undertaken without careful planning may yield datasets that contain little information relevant to those questions. This dissertation research develops and presents approaches that can be used to support careful planning of hydrologic experiments and monitoring networks. Specifically, three general types of problems are considered. Under the first problem type, the objective of the hydrologic investigation is to discriminate among rival conceptual models or among rival predictive groupings; a Bayesian methodology is presented that can be used to rank prospective datasets during the planning phases of a hydrologic investigation. Under the second problem type, the objective is to quantify the impact of existing data on reductions in parameter uncertainty; an inverse modeling approach is presented to quantify this impact when the hydrogeologic conceptual model is itself uncertain. The third and final problem type focuses on data collection in a water resource management context, with the specific goal of maximizing profits without imposing adverse environmental impacts. A risk-based decision support framework is developed using detailed hydrologic simulation to evaluate probabilistic constraints. This enables direct calculation of the profit gains associated with prospective reductions in system parameter uncertainty, and of the possible environmental impacts of unknown bias in the system parameters.
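The first problem type, ranking prospective datasets by their power to discriminate rival models, can be illustrated with a small preposterior Monte Carlo calculation. The sketch below is a generic Bayesian illustration with hypothetical models, sites, and noise levels, not the dissertation's methodology: it averages the posterior probability of the data-generating model over simulated data to score two candidate measurement sites.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two rival conceptual models predicting a prospective measurement at two
# candidate sites, with Gaussian measurement noise (all values hypothetical).
pred = {"A": np.array([1.0, 3.0]),   # model A's predictions at sites 1, 2
        "B": np.array([1.2, 5.0])}   # model B's predictions
sigma = 0.5
prior = {"A": 0.5, "B": 0.5}

def expected_discrimination(site, n_mc=20000):
    """Preposterior expected posterior probability of the true model."""
    total = 0.0
    for truth in ("A", "B"):
        # Simulate prospective data assuming `truth` generated them.
        d = pred[truth][site] + sigma * rng.standard_normal(n_mc)
        like = {m: np.exp(-0.5 * ((d - pred[m][site]) / sigma) ** 2)
                for m in ("A", "B")}
        post_truth = (prior[truth] * like[truth]
                      / (prior["A"] * like["A"] + prior["B"] * like["B"]))
        total += prior[truth] * post_truth.mean()
    return total

for site in (0, 1):
    print(f"site {site + 1}: expected P(true model | data) = "
          f"{expected_discrimination(site):.3f}")
# Site 2, where the rival models disagree most, has the greater data worth.
```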
