11

Sensor Networks: Studies on the Variance of Estimation, Improving Event/Anomaly Detection, and Sensor Reduction Techniques Using Probabilistic Models

Chin, Philip Allen 19 July 2012 (has links)
Sensor network performance is governed by the physical placement of sensors and their geometric relationship to the events they measure. To illustrate this, the thesis covers four interconnected subjects: 1) graphical analysis of the variance of the estimation error caused by the physical characteristics of an acoustic target source and its geometric location relative to sensor arrays, 2) an event/anomaly detection method for time-aggregated point sensor data using a parametric Poisson distribution data model, 3) a sensor reduction or placement technique using Bellman optimal estimates of target agent dynamics and probabilistic training data (Goode, Chin, & Roan, 2011), and 4) a method that transforms event-monitoring point sensor data into event detection and classification of direction of travel using contextual, joint-probability, causal-relationship, sliding-window, and geospatial intelligence (GEOINT) techniques. / Master of Science
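
A minimal sketch of the Poisson-based event/anomaly detector mentioned in item 2 of this abstract, assuming counts aggregated into fixed time bins, a rate fitted from an initial training window, and an illustrative tail threshold alpha; this is not the thesis implementation.

```python
# Minimal sketch of Poisson-based event/anomaly detection on
# time-aggregated point-sensor counts (illustrative, not the thesis code).
from scipy.stats import poisson

def detect_anomalies(counts, train_len=100, alpha=0.001):
    """Flag bins whose counts are improbably large under a Poisson model
    fitted to an initial training window."""
    rate = sum(counts[:train_len]) / train_len        # MLE of the Poisson rate
    threshold = poisson.ppf(1.0 - alpha, rate)        # upper-tail critical count
    return [i for i, c in enumerate(counts[train_len:], start=train_len)
            if c > threshold]

# Example: a burst of activity after quiet background traffic.
background = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3] * 10      # 100 training bins
observed = background + [3, 2, 15, 18, 4, 2]          # bins 102-103 are anomalous
print(detect_anomalies(observed))                     # -> [102, 103]
```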
12

A framework for conducting mechanistic based reliability assessments of components operating in complex systems

Wallace, Jon Michael 02 December 2003 (has links)
Reliability prediction of components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e. mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process. The objective of this study is the development of a framework that infuses the influence of the system into the process of conducting mechanistic-based component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are qualitative in nature and employ system reliability and safety engineering principles to establish an appropriate starting point for the component reliability assessment. The most distinctive steps of the framework are the step used to quantify the system-driven local parameter space and a subsequent step that uses this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two newly developed multivariate probability tools: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Whereas existing joint probability models require preliminary statistical information about the responses, these tools combine statistical information about the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution. Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique. The novelty of this approach is that each individual local parameter, and even subsets of parameters representing entire contributing analyses, can now be rank-ordered with respect to their contribution not just to one response but to the entire vector of component responses simultaneously. The final step of the framework is the actual probabilistic assessment of the component. Variations of this final step are given to allow for the utilization of existing probabilistic methods such as response surface Monte Carlo and Fast Probability Integration. The framework developed in this study is implemented to conduct the finite-element-based reliability prediction of a gas turbine airfoil involving several failure responses. The framework, as implemented, resulted in a considerable improvement in the accuracy of the part reliability assessment and an increased statistical understanding of the component failure behavior.
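
The Multi-Response First Order Second Moment tool is specific to this framework, but the underlying first-order second-moment idea can be sketched generically: propagate the input mean vector and covariance matrix through a linearized response. The two-response bar model, input statistics, and finite-difference step below are illustrative assumptions, not the framework's actual analyses.

```python
# Generic first-order second-moment (FOSM) propagation for a vector-valued
# response: approximate the response mean and covariance from the input
# mean vector and covariance matrix (illustrative sketch only).
import numpy as np

def fosm(response, mu_x, cov_x, h=1e-6):
    """First-order estimate of the mean and covariance of response(x)."""
    mu_x = np.asarray(mu_x, dtype=float)
    y0 = np.atleast_1d(response(mu_x))
    # Numerical Jacobian dy/dx evaluated at the input mean.
    J = np.zeros((y0.size, mu_x.size))
    for j in range(mu_x.size):
        step = h * max(1.0, abs(mu_x[j]))   # relative finite-difference step
        x = mu_x.copy()
        x[j] += step
        J[:, j] = (np.atleast_1d(response(x)) - y0) / step
    cov_y = J @ cov_x @ J.T                 # first-order covariance propagation
    return y0, cov_y

# Hypothetical two-response component model: stress and deflection of a bar.
def bar(x):
    load, area, modulus = x
    return np.array([load / area, load / (area * modulus)])

mu, cov = [1000.0, 2.0, 200e3], np.diag([50.0**2, 0.05**2, 1e4**2])
mean_y, cov_y = fosm(bar, mu, cov)
print(mean_y, np.sqrt(np.diag(cov_y)))      # response means and standard deviations
```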
13

Value-informed space systems design and acquisition

Brathwaite, Joy Danielle 16 December 2011 (has links)
Investments in space systems are substantial, indivisible, and irreversible, characteristics that make them high-risk, especially when coupled with an uncertain demand environment. Traditional approaches to system design and acquisition, derived from a performance- or cost-centric mindset, incorporate little information about the spacecraft in relation to its environment and its value to its stakeholders. These traditional approaches, while appropriate in stable environments, are ill-suited for the current, distinctly uncertain and rapidly changing technical and economic conditions; as such, they have to be revisited and adapted to the present context. This thesis proposes that in uncertain environments, decision-making with respect to space system design and acquisition should be value-based, or at a minimum value-informed. This research advances the value-centric paradigm by providing the theoretical basis, foundational frameworks, and supporting analytical tools for value assessment of priced and unpriced space systems. For priced systems, stochastic models of the market environment and financial models of stakeholder preferences are developed and integrated with a spacecraft-sizing tool to assess the system's net present value. The analytical framework is applied to a case study of a communications satellite, with market, financial, and technical data obtained from the satellite operator, Intelsat. The case study investigates the implications of value-centric versus cost-centric design and acquisition choices. Results identify the ways in which value-optimal spacecraft design choices are contingent on both technical and market conditions, and show, for example, that larger spacecraft, which reap economies-of-scale benefits reflected in their decreasing cost per transponder, are not always the best (most valuable) choices. Market conditions and technical constraints for which convergence occurs between design choices under a cost-centric and a value-centric approach are identified and discussed. In addition, an innovative approach for characterizing value uncertainty through partial moments, a technique used in finance, is adapted to an engineering context and applied to priced space systems. Partial moments disaggregate uncertainty into upside potential and downside risk, and as such, they provide the decision-maker with additional insights for value-uncertainty management in design and acquisition. For unpriced space systems, this research first posits that their value derives from, and can be assessed through, the value of the information they provide. To this effect, a Bayesian framework is created to assess system value in which the system is viewed as an information provider and the stakeholder as an information recipient. Information has value to stakeholders because it changes their rational beliefs, enabling them to achieve higher expected pay-offs. Based on this marginal increase in expected pay-offs, a new metric, Value-of-Design (VoD), is introduced to quantify the unpriced system's value. The Bayesian framework is applied to the case of an Earth Science satellite that provides hurricane information to oil rig operators using nested Monte Carlo modeling and simulation. Probability models of stakeholders' beliefs and economic models of pay-offs are developed and integrated with a spacecraft payload generation tool.
The case study investigates the information value generated by each payload; results point to clusters of payload instruments that yield higher information value and to minimum information thresholds below which it is difficult to justify acquisition of the system. In addition, an analytical decision tool, probabilistic Pareto fronts, is developed in the Cost-VoD trade space to provide the decision-maker with additional insights into the coupling of a system's probable value generation and its associated cost risk.
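
A short sketch of the partial-moment characterization of value uncertainty described above, splitting a sample of value outcomes into downside risk and upside potential around a target; the NPV sample, target, and moment orders are illustrative assumptions rather than the case-study figures.

```python
# Sketch of lower/upper partial moments: disaggregate value uncertainty
# around a target into downside risk and upside potential (illustrative).
import numpy as np

def lower_partial_moment(values, target, order=2):
    """Average of (target - value)^order over outcomes below the target."""
    shortfall = np.maximum(target - np.asarray(values), 0.0)
    return np.mean(shortfall ** order)

def upper_partial_moment(values, target, order=1):
    """Average of (value - target)^order over outcomes above the target."""
    excess = np.maximum(np.asarray(values) - target, 0.0)
    return np.mean(excess ** order)

# Hypothetical Monte Carlo sample of net present values (in $M) and a
# break-even target of 0: downside risk vs. upside potential.
rng = np.random.default_rng(0)
npv_samples = rng.normal(loc=40.0, scale=60.0, size=10_000)
print("downside risk (LPM2):", lower_partial_moment(npv_samples, 0.0))
print("upside potential (UPM1):", upper_partial_moment(npv_samples, 0.0))
```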
14

Modelling Losses in Flood Estimation

Ilahee, Mahbub January 2005 (has links)
Flood estimation is often required in hydrologic design and has important economic significance. For example, in Australia the annual spending on infrastructure requiring flood estimation is of the order of $650 million (ARR; I.E. Aust., 1998). Rainfall-based flood estimation techniques are most commonly adopted in practice. These require several inputs to convert design rainfalls to design floods. Of all the inputs, loss is an important one; it is defined as the amount of precipitation that does not appear as direct runoff. The concept of loss includes moisture intercepted by vegetation, infiltration into the soil, retention on the surface, evaporation, and loss through the streambed and banks. As these loss components depend on topography, soils, vegetation and climate, loss exhibits a high degree of temporal and spatial variability during a rainfall event. In design flood estimation, simplified lumped conceptual loss models are used because of their simplicity and ability to approximate catchment runoff behaviour. In Australia, the most commonly adopted conceptual loss model is the initial loss-continuing loss (IL-CL) model. For a specific part of the catchment, the initial loss occurs prior to the commencement of surface runoff and can be considered to be composed of the interception loss, depression storage and infiltration that occur before the soil surface saturates. ARR (I.E. Aust., 1998) defines the continuing loss as the average rate of loss throughout the remainder of the storm. At present, there is inadequate information on design losses in most parts of Australia, and this is one of the greatest weaknesses in Australian flood hydrology. Currently recommended design losses are not compatible with the design rainfall information in Australian Rainfall and Runoff. Design losses for observed storms also show wide variability, and it is always difficult to select an appropriate loss value from this wide range for a particular application. Despite this wide variability, the widely used Design Event Approach adopts a single value of initial and continuing loss. Because of the non-linearity in the rainfall-runoff process, this is likely to introduce a high degree of uncertainty and possible bias into the resulting flood estimates. In contrast, the Joint Probability Approach can consider probability-distributed losses in flood estimation. ARR (I.E. Aust., 1998) recommends using a constant continuing loss value within a rainfall event. In this research it was observed that continuing loss values in rainfall events were not constant; rather, they decayed with the duration of the rainfall event. The loss values derived from 969 rainfall and streamflow events in Queensland catchments would provide better flood estimates than the design loss values recommended in ARR (I.E. Aust., 1998). In this research, both the initial and continuing losses were computed using the IL-CL loss model, and a single median loss value was used to estimate floods with the Design Event Approach. Both losses were then treated as random variables and their probability distribution functions were determined. The research thus showed that probability-distributed loss values can be used for Queensland catchments in the near future for better flood estimates. The research hypothesis tested was whether the new loss values for Queensland catchments provide a significant improvement in design flood estimation.
A total of 48 catchments, 82 pluviograph stations and 24 daily rainfall stations from across Queensland were selected to test the research hypothesis. The research improved the recommended design loss values, which will result in more precise design flood estimates and ultimately save millions of dollars in the construction of hydraulic infrastructure.
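
A minimal sketch of the IL-CL (initial loss-continuing loss) model applied to a design hyetograph: the initial loss is satisfied first, then a constant continuing-loss rate is subtracted each step to give rainfall excess. The storm depths, initial loss, and continuing-loss rate are illustrative values, not the derived Queensland design losses.

```python
# Sketch of the initial loss-continuing loss (IL-CL) model: subtract an
# initial loss until it is satisfied, then a constant continuing-loss rate,
# to obtain rainfall excess for each time step (illustrative values only).
def rainfall_excess(hyetograph_mm, initial_loss_mm, continuing_loss_mm_per_step):
    excess = []
    remaining_il = initial_loss_mm
    for rain in hyetograph_mm:
        # Satisfy the initial loss before any runoff is produced.
        absorbed = min(rain, remaining_il)
        remaining_il -= absorbed
        net = rain - absorbed
        # Once the initial loss is exhausted, apply the continuing loss each step.
        if remaining_il == 0.0:
            net = max(net - continuing_loss_mm_per_step, 0.0)
        excess.append(net)
    return excess

# Hypothetical 6-step storm (mm per step), IL = 15 mm, CL = 2.5 mm/step.
storm = [5.0, 12.0, 20.0, 18.0, 8.0, 3.0]
print(rainfall_excess(storm, 15.0, 2.5))   # -> [0.0, 0.0, 17.5, 15.5, 5.5, 0.5]
```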
15

Empirical evaluation of a Markovian model in a limit order market

Trönnberg, Filip January 2012 (has links)
A stochastic model for the dynamics of a limit order book is evaluated and tested on empirical data. Arrivals of limit, market, and cancellation orders are described in terms of a Markovian queuing system with exponentially distributed occurrences. In this model, several key quantities can be calculated analytically, such as the distribution of times between price moves, the price volatility, and the probability of an upward price move, all conditional on the state of the order book. We show that the exponential distribution poorly fits the occurrences of order book events and further show that little resemblance exists between the analytical formulas of this model and the empirical data. The log-normal and Weibull distributions are suggested as replacements, as they appear to fit the empirical data better.
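
The distributional comparison reported here can be outlined as follows: fit the candidate distributions to inter-event times by maximum likelihood and compare them by an information criterion. The synthetic inter-event times and the fixed zero location are illustrative assumptions, not the study's order-book data.

```python
# Sketch: compare exponential, Weibull, and log-normal fits to order-book
# inter-event times by maximum likelihood and AIC (synthetic data shown).
import numpy as np
from scipy import stats

def aic(log_likelihood, n_params):
    return 2 * n_params - 2 * log_likelihood

rng = np.random.default_rng(1)
inter_event_times = rng.weibull(0.7, size=5000)   # stand-in for empirical data

candidates = {
    "exponential": (stats.expon, dict(floc=0)),
    "weibull": (stats.weibull_min, dict(floc=0)),
    "log-normal": (stats.lognorm, dict(floc=0)),
}
for name, (dist, fit_kwargs) in candidates.items():
    params = dist.fit(inter_event_times, **fit_kwargs)
    loglik = np.sum(dist.logpdf(inter_event_times, *params))
    free = len(params) - 1                        # location was fixed at zero
    print(f"{name:12s} AIC = {aic(loglik, free):.1f}")
```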
16

The Simulation & Evaluation of Surge Hazard Using a Response Surface Method in the New York Bight

Bredesen, Michael H 01 January 2015 (has links)
Atmospheric features such as tropical cyclones act as a driving mechanism for many of the major hazards affecting coastal areas around the world. Accurate and efficient quantification of tropical cyclone surge hazard is essential to the development of resilient coastal communities, particularly given concerns over continuing sea level trends. Recent major tropical cyclones that have impacted the northeastern portion of the United States have resulted in devastating flooding in New York City, the most densely populated city in the US. As part of a national effort to re-evaluate coastal inundation hazards, the Federal Emergency Management Agency used the Joint Probability Method to re-evaluate surge hazard probabilities for Flood Insurance Rate Maps in the New York – New Jersey coastal area, also termed the New York Bight. As originally developed, this method required many combinations of storm parameters to statistically characterize the local climatology for numerical model simulation. Even though high-performance computing efficiency has vastly improved in recent years, researchers have utilized various "Optimal Sampling" techniques to reduce the number of storm simulations needed in the traditional Joint Probability Method. This manuscript presents results from the simulation of over 350 synthetic tropical cyclones designed to produce significant surge in the New York Bight using the hydrodynamic Advanced Circulation (ADCIRC) numerical model, bypassing the need for Optimal Sampling schemes. This data set allowed for a careful assessment of the joint probability distributions utilized for this area and of the impacts of current assumptions used in deriving new flood-risk maps for the New York City area.
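
A stripped-down sketch of the Joint Probability Method's final integration step: each synthetic storm carries an annual rate from the storm-parameter joint distribution, and the rates of storms exceeding a given surge level sum to that level's exceedance frequency. The storm rates and surge values below are invented placeholders, not the New York Bight inputs.

```python
# Sketch of the Joint Probability Method integration step: combine the
# annual rate assigned to each synthetic storm with its simulated peak
# surge to build a surge exceedance (hazard) curve. Placeholder values.
import numpy as np

# (annual rate of occurrence, simulated peak surge in metres) per storm;
# in practice the rates come from the storm-parameter joint distribution
# and the surges from hydrodynamic simulations.
storms = [
    (0.010, 1.2), (0.008, 1.8), (0.005, 2.4),
    (0.003, 3.1), (0.001, 3.9), (0.0005, 4.6),
]

def exceedance_rate(surge_level, storms):
    """Annual rate of peak surge exceeding a given level."""
    return sum(rate for rate, surge in storms if surge > surge_level)

for level in np.arange(1.0, 5.0, 1.0):
    rate = exceedance_rate(level, storms)
    rp = 1.0 / rate if rate > 0 else float("inf")
    print(f"surge > {level:.1f} m: rate {rate:.4f}/yr, return period {rp:.0f} yr")
```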
