About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Calibration-free image sensor modelling: deterministic and stochastic

Lim, Shen Hin, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW January 2009 (has links)
This dissertation presents a calibration-free image sensor modelling process for localisation applications, designed to be robust to changes in the environment and in sensor properties. The modelling process consists of two distinct parts, a deterministic technique and a stochastic technique, and is achieved using mechanistic deconvolution, in which the sensor's mechanical and electrical properties are utilised. In the deterministic technique, the sensor's effective focal length is first estimated from known lens properties and used to approximate the lens system by a thick lens and its properties. The aperture stop position offset, one of the thick lens properties, is then used to derive a new factor, the calibration-free distortion effects factor, which characterises the distortion effects inherent in the sensor. Using this factor and the given pan and tilt angles of an arbitrary plane of view, corrected image data is generated that complies with the image sensor constraints modified by the pan and tilt angles. In the stochastic technique, the stochastic focal length and distortion effects factor are first approximated using tolerances of the mechanical and electrical properties. These are then utilised to develop the observation likelihood required in recursive Bayesian estimation. The proposed modelling process reduces dependency on image data and, as a result, does not require experimental setup or calibration. An experimental setup was nevertheless constructed to analyse the accuracy of the proposed modelling process and its robustness to changes in sensor properties and in pan and tilt angles without recalibration. Compared with a conventional modelling process on three sensors with different specifications, the proposed process achieved similar accuracy with one-seventh the number of iterations. The developed model also proved robust and, in comparison to the conventional modelling process, reduced the errors by a factor of five. Using an area coverage method and one-step lookahead as control strategies, the stochastic sensor model was applied in a recursive Bayesian estimation application and compared with a conventional approach. The proposed model provided better target state estimates and achieved higher efficiency and reliability than the conventional approach.
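The stochastic technique supplies the observation likelihood used in recursive Bayesian estimation. As a minimal, hypothetical sketch of that mechanism only (a discretised 1-D target state, a Gaussian range-like likelihood, and an invented noise level; none of this is the thesis implementation), the update loop might look like:

```python
import numpy as np

# Hypothetical 1-D grid of candidate target positions (metres).
positions = np.linspace(0.0, 10.0, 501)
prior = np.full_like(positions, 1.0 / positions.size)  # uniform prior belief

def observation_likelihood(z, grid, sigma):
    """Gaussian likelihood p(z | x) for a range-style measurement z;
    sigma would come from tolerances of the sensor properties."""
    return np.exp(-0.5 * ((z - grid) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def bayes_update(belief, z, sigma):
    """One recursive Bayesian estimation step: posterior is likelihood times prior, renormalised."""
    posterior = observation_likelihood(z, positions, sigma) * belief
    return posterior / posterior.sum()

belief = prior
for z in [4.2, 4.0, 4.3]:          # simulated measurements
    belief = bayes_update(belief, z, sigma=0.5)

print("MAP estimate of target position:", positions[np.argmax(belief)])
```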
2

Toward designing a sustainable watershed reclamation strategy

Keshta, Nader 03 November 2010
Oil sands mining results in significant disturbances to natural ecosystems when soil and overburden materials are removed and stockpiled to provide access to mined materials. The mining process must be followed by land reclamation, whereby disturbed landscapes are recovered with the intent to replicate the performance of natural watersheds. Modeling hydrological processes in reclaimed landscapes is essential to assess the hydrological performance of reclamation strategies and their evolution over time, and it requires a reliable and continuous source of input data. To simulate the various hydrological processes, such as soil moisture and actual evapotranspiration, a lumped generic system dynamics watershed (GSDW) model has been developed. The validity of the proposed model has been assessed in terms of its capacity to reproduce the hydrological behaviour of both reconstructed and natural watersheds.

Data availability is a major challenge that constrains not only the type of models used but also their predictive ability and accuracy. This study evaluates the utility of precipitation and temperature data from the North American Regional Reanalysis (NARR) versus conventional platform data (e.g., meteorological stations) for hydrological modeling. Results indicate that NARR data are a suitable alternative to local weather station data for simulating soil moisture patterns and evapotranspiration fluxes, despite the complexity involved in simulating such processes. Initially, the calibrated GSDW model was used along with available historical meteorological records, from both Environment Canada and NARR, to estimate the maximum soil moisture deficit and annual evapotranspiration fluxes. A probabilistic framework was adopted, and frequency curves of the maximum annual moisture deficit values were constructed and used to assess the probability that various reconstructed and natural watersheds would provide the desired moisture demands. The study shows a tendency for the reconstructed watersheds to provide less moisture for evapotranspiration than natural systems. The probabilistic framework could be implemented to integrate information gained from mature natural watersheds (e.g., the natural system canopy) and transfer the results to newly reconstructed systems.

Finally, this study provided some insight into the sensitivity of soil moisture patterns and evapotranspiration to possible changes in projected precipitation and air temperature in the 21st century. Climate scenarios were generated using daily, statistically downscaled precipitation and air temperature outputs from a global climate model (CGCM3), under the A2 and B1 emission scenarios, to simulate the corresponding soil moisture and evapotranspiration using the GSDW model. Study results suggest that a decrease in the maximum annual moisture deficit will occur due to the expected increase in annual precipitation and air temperature, whereas actual evapotranspiration and runoff are more likely to increase.
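The frequency-curve step described above can be illustrated with a simple empirical exceedance calculation. The sketch below is not from the thesis: the deficit values are invented and the Weibull plotting position is an assumed choice of frequency estimator.

```python
import numpy as np

# Hypothetical annual maximum soil-moisture deficits (mm) from a model run.
annual_max_deficit = np.array([88., 102., 95., 120., 76., 110., 99., 130., 84., 105.])

def exceedance_curve(annual_maxima):
    """Empirical frequency curve using the Weibull plotting position
    P = rank / (n + 1), applied to the sorted annual maxima."""
    x = np.sort(annual_maxima)[::-1]            # largest deficit first
    ranks = np.arange(1, x.size + 1)
    p_exceed = ranks / (x.size + 1.0)
    return x, p_exceed

deficits, p = exceedance_curve(annual_max_deficit)
for d, pe in zip(deficits, p):
    print(f"deficit >= {d:6.1f} mm with exceedance probability {pe:.2f}")
```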
3

Methodology for Predicting Drilling Performance from Environmental Conditions

De Almeida, Jose Alejandro December 2010 (has links)
The use of statistics has been common practice within the petroleum industry for over a decade. With such a mature subject, including specialized software and numerous articles, the challenge of this project was to introduce a reproducible method for performing deterministic regression while confirming both the mathematical and the practical validity of the resulting model. A five-step procedure was introduced, using SAS statistical software for the necessary computations, to obtain a model that describes an event by analyzing the environmental variables. Since SAS may not be readily available, code to perform the five-step methodology in R has also been provided. The deterministic five-step methodology may be applied to new fields with a limited amount of data. As an example case, 17 wells drilled in north central Texas were used to illustrate how to apply the methodology to obtain a deterministic model. The objective was to predict the number of days required to drill a well using environmental conditions and technical variables; ideally, the predicted number of days would be within +/- 10% of the observed drilling time. The database created contained 58 observations from 17 wells with the descriptive variables technical limit (referred to as estimated days), depth, bottomhole temperature (BHT), inclination (inc), mud weight (MW), fracture pressure (FP), pore pressure (PP), and the average, maximum, and minimum differences between fracture pressure and mud weight and between mud weight and pore pressure. Step 1 created the database. Step 2 performed an initial statistical regression on the original dataset. Step 3 ensured that the models were valid by performing univariate analysis. Step 4 history-matched the models' responses to actual observed data. Step 5 repeated the procedure until the best model had been found. Four main regression techniques were used: stepwise regression, forward selection, backward elimination, and least squares regression. Using these four regression techniques and best engineering judgment, a model was found that improved time prediction accuracy but did not consistently produce values within +/- 10% of the observed times. The five-step methodology for determining a model using deterministic statistics has applications in many different areas within the petroleum field. Unlike examples found in the literature, emphasis has been given to validating the model by analyzing the model error. Because the methodology is organized around the five-step procedure rather than any particular package, it may be applied within different software programs, allowing for greater usage. These two features allow companies to obtain their own time prediction models without outsourcing the work, and to test the certainty of any chosen model.
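The thesis supplies SAS code and an R equivalent for the five-step procedure; neither is reproduced here. As a rough, hypothetical illustration in Python, the sketch below shows what one of the four regression techniques (forward selection, here driven by AIC) and the +/- 10% validation check might look like; the data frame columns and values are invented, not the thesis database.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical well database; column names are illustrative, not the thesis schema.
df = pd.DataFrame({
    "days":       [14, 18, 22, 16, 25, 19, 21, 17, 23, 20],
    "depth":      [8000, 9500, 11000, 8800, 12500, 9900, 10500, 9100, 11800, 10200],
    "bht":        [180, 200, 230, 190, 250, 205, 220, 195, 240, 215],
    "mud_weight": [9.5, 10.0, 10.8, 9.7, 11.2, 10.1, 10.5, 9.8, 11.0, 10.3],
})

def forward_select(data, response):
    """Greedy forward selection: repeatedly add the predictor that lowers AIC most."""
    remaining = [c for c in data.columns if c != response]
    selected, best_aic = [], np.inf
    while remaining:
        scores = []
        for cand in remaining:
            X = sm.add_constant(data[selected + [cand]])
            scores.append((sm.OLS(data[response], X).fit().aic, cand))
        aic, cand = min(scores)
        if aic >= best_aic:
            break
        best_aic = aic
        selected.append(cand)
        remaining.remove(cand)
    model = sm.OLS(data[response], sm.add_constant(data[selected])).fit()
    return model, selected

model, selected = forward_select(df, "days")
pred = model.predict(sm.add_constant(df[selected]))
within = np.abs(pred - df["days"]) / df["days"] <= 0.10   # the +/- 10% criterion
print(model.params)
print(f"{within.mean():.0%} of observations predicted within +/- 10%")
```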
4

Decision thresholds: cognitive limitations in sequential probabilistic decision making

Eastman, Kyler Maxwell 05 November 2012 (has links)
Psychologists have long appreciated that many real-world decisions require a balance of expediency and accuracy in gathering evidence. Often the best decisions are made when there is some lingering uncertainty. But how much? Depending on the situation, choosing the right amount of evidence can be a fine line between making a rash decision and being indecisive. Psychologists have been reluctant to study people's ability to judge the correct "threshold" for probabilistic decisions. There are two reasons for this. First, the question of a decision threshold, or "when should I stop gathering evidence?", is confounded by the larger issue of how subjects choose and integrate that evidence. Subjects may have a decision threshold that is consistently sub-optimal with respect to a model that does not consider cognitive constraints, yet may actually be choosing the right amount of information given their own cognitive limitations. Second, it has been shown that people often use specific heuristics in making probabilistic decisions. In this case, defining a decision threshold would depend largely on the heuristic and task, preventing a study of decision thresholds that is widely applicable. The research presented here addresses both of these concerns. I defined a task where the ideal decision threshold is clearly defined, requiring some evidence but not an exhaustive search. Furthermore, this threshold can be precisely manipulated by changes in the reward structure. Although it is possible to use a "sufficing" or sub-optimal heuristic, subjects are given a significant financial incentive to fully integrate as much evidence as possible. Lastly, and most importantly, a general model of people's cognitive limitations is applied to the traditional normative model. This enhancement allows a more refined study of humans' ability to place their decision threshold according to environmental conditions.
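As an illustration of how a reward structure pins down an ideal stopping threshold, the simulation sketch below sweeps candidate posterior thresholds in a generic two-alternative evidence-accumulation task. It is not the dissertation's task or model; the cue reliability, reward, and sampling cost are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_reward(threshold, p_correct_cue=0.7, reward=1.0, sample_cost=0.02,
                    n_trials=5000):
    """Simulate a two-alternative task: keep drawing noisy binary cues until the
    posterior for one option exceeds `threshold`, then choose it.
    All task parameters here are illustrative, not taken from the dissertation."""
    total = 0.0
    llr = np.log(p_correct_cue / (1 - p_correct_cue))          # log-likelihood ratio per cue
    bound = np.log(threshold / (1 - threshold))                # posterior threshold in log-odds
    for _ in range(n_trials):
        truth = rng.integers(2)                                # hidden correct option
        log_odds, samples = 0.0, 0
        while abs(log_odds) < bound:
            cue = rng.random() < (p_correct_cue if truth == 1 else 1 - p_correct_cue)
            log_odds += llr if cue else -llr
            samples += 1
        choice = 1 if log_odds > 0 else 0
        total += (reward if choice == truth else 0.0) - sample_cost * samples
    return total / n_trials

# Sweep candidate thresholds to locate the reward-maximising stopping point.
for t in [0.6, 0.7, 0.8, 0.9, 0.95, 0.99]:
    print(f"threshold {t:.2f}: expected reward {expected_reward(t):+.3f}")
```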
5

Structural Reliability: Assessing the Condition and Reliability of Casing in Compacting Reservoirs

Chantose, Prasongsit December 2011 (has links)
Casing has a higher risk of failure in a compacting reservoir than in a typical reservoir: casing fails when reservoir compaction induces compression and shear stresses on it, and reservoirs compact as pore pressure depletes during production. Highly compacting reservoirs are typically composed of unconsolidated, overpressured rocks such as chalk, diatomite, and sandstone. Pore pressure depletion increases effective stress, the rock matrix stress pushing upward against the overburden pressure. When effective stress exceeds the rock's compressive strength, compaction is induced. Wells in compacting reservoirs therefore face high failure and deformation rates. This project introduces the concept of structural reliability to quantify casing failure risks in compacting reservoirs. This research developed probabilistic models for casing capacities, using current design methods, and for the reservoir compaction load, using finite-element simulations. The probabilistic models were used to create two limit-state functions for predicting casing failure: axial yielding and buckling. A limit-state function describes the casing condition as the casing experiences the reservoir compaction load, and it serves as the input to component and system analyses of casing fragility and the conditional probability of casing failure. Fragility curves predict the probability of casing failure as reservoir pressure depletes. Sensitivity and importance analyses are also performed to determine which parameters most affect casing reliability. Applying the knowledge produced from this research to casing design methods can improve design reliability and forecast the risk of casing failure in compacting reservoirs.
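To make the limit-state idea concrete, the sketch below estimates a probability of failure by Monte Carlo for a generic limit state g = R - S (capacity minus compaction-induced demand). The distributions and parameters are invented for illustration and are not the probabilistic models developed in the thesis.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical lognormal casing axial capacity R and normal compaction-induced
# axial demand S (units arbitrary); parameters are illustrative only.
R = rng.lognormal(mean=np.log(900.0), sigma=0.10, size=n)   # capacity
S = rng.normal(loc=650.0, scale=120.0, size=n)              # demand

# Limit-state function g = R - S; failure is the event g <= 0.
g = R - S
pf = np.mean(g <= 0.0)

# Generalised reliability index beta = -Phi^{-1}(pf).
beta = -norm.ppf(pf)

print(f"estimated probability of failure: {pf:.4f}")
print(f"generalised reliability index beta: {beta:.2f}")
```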
6

Algorithms and Methodology for Post-Manufacture Adaptation to Process Variations and Induced Noise in Deeply Scaled CMOS Technologies

Ashouei, Maryam 27 September 2007 (has links)
In the last two decades, VLSI technology scaling has spurred rapid growth in the semiconductor industry. With CMOS device dimensions falling below 100 nm, achieving higher performance and packing more complex functionality into digital integrated circuits has become easier. However, the scaling trend poses new challenges to design and process engineers. First, larger process parameter variations in current technologies cause a larger spread in the delay and power distributions of circuits and result in parametric yield loss. In addition, ensuring the reliability of deep sub-micron (DSM) technologies under soft/transient errors is a significant challenge. These errors occur because of the combined effects of atmospheric radiation and the significantly reduced noise margins of scaled technologies. This thesis focuses on addressing the issues related to process variations and reliability in deeply scaled CMOS technologies. The objective of this research has been to develop circuit-level techniques that address process variations, transient errors, and reliability concerns. The proposed techniques fall into two parts: the first addresses process variation and proposes techniques to reduce its effects on the power and performance distributions; the second deals with transient errors and techniques to reduce their effect with minimum hardware or computational overhead.
7

A Gaussian Mixture Model based Level Set Method for Volume Segmentation in Medical Images

Webb, Grayson January 2018 (has links)
This thesis proposes a probabilistic level set method for segmenting tumors with heterogeneous intensities. It models the intensities of the tumor and surrounding tissue using Gaussian mixture models. Through a contour-based initialization procedure, samples are gathered and used in expectation-maximization of the mixture model parameters. The proposed method is compared against a threshold-based segmentation method using MRI images retrieved from The Cancer Imaging Archive. The cases are manually segmented, and an automated testing procedure is used to find optimal parameters for the proposed method, which is then tested against the threshold-based method. Segmentation times, Dice coefficients, and volume errors are compared. The evaluation reveals that the proposed method has a mean segmentation time comparable to the threshold-based method, and performs faster in cases where the volume error does not exceed 40%. The mean Dice coefficient and volume error are also improved, with lower deviation.
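The intensity-modelling half of this approach can be sketched in a few lines. The example below is illustrative only: it fits per-class Gaussian mixtures to synthetic intensity samples and scores voxels by likelihood, and it omits the level set evolution that the thesis builds on top of this.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Synthetic stand-in for MRI intensities: a heterogeneous "tumor" region and
# surrounding tissue (the thesis uses real TCIA volumes; these values are made up).
tumor_samples = np.concatenate([rng.normal(180, 15, 400), rng.normal(230, 10, 200)])
tissue_samples = rng.normal(90, 25, 800)

# One Gaussian mixture per class, fitted by expectation-maximization.
gmm_tumor = GaussianMixture(n_components=2, random_state=0).fit(tumor_samples.reshape(-1, 1))
gmm_tissue = GaussianMixture(n_components=1, random_state=0).fit(tissue_samples.reshape(-1, 1))

def classify(intensities):
    """Label a voxel as tumor when its tumor-mixture log-likelihood is higher."""
    x = intensities.reshape(-1, 1)
    return gmm_tumor.score_samples(x) > gmm_tissue.score_samples(x)

def dice(pred, truth):
    """Dice coefficient: 2|A intersect B| / (|A| + |B|)."""
    return 2.0 * np.sum(pred & truth) / (np.sum(pred) + np.sum(truth))

# Evaluate on a held-out synthetic slice with known ground truth.
truth = np.concatenate([np.ones(300, bool), np.zeros(700, bool)])
intens = np.concatenate([rng.normal(190, 25, 300), rng.normal(95, 25, 700)])
print(f"Dice coefficient: {dice(classify(intens), truth):.3f}")
```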
8

Probabilistic Ensemble Machine Learning Approaches for Unstructured Textual Data Classification

Srushti Sandeep Vichare (17277901) 26 April 2024 (has links)
The volume of big data has surged, notably in unstructured textual data such as emails and social media posts. Unstructured data currently represents over 80% of global data, and its growth is propelled by digitalization. Unstructured text analysis is crucial for applications such as social media sentiment analysis, customer feedback interpretation, and medical records classification. The complexity stems from variability in language use, context sensitivity, and the nuanced meanings expressed in natural language. Traditional machine learning approaches, while effective for structured data, frequently fall short when applied to unstructured text because of these complexities, and extracting value from this data requires advanced analytics and machine learning. Recognizing these challenges, we developed ensemble approaches that combine the strengths of multiple conventional machine learning classifiers through a probabilistic approach, resulting in two novel models: the Consensus-Based Integration Model (CBIM) and the Unified Predictive Averaging Model (UPAM). The CBIM and UPAM ensemble models were applied to a Twitter dataset (40,000 data samples) and the National Electronic Injury Surveillance System (NEISS) dataset (323,344 data samples), addressing various challenges in unstructured text analysis. On the NEISS dataset the models achieved an accuracy of 99.50%, demonstrating the effectiveness of ensemble models in extracting relevant features and making accurate predictions. On the Twitter dataset, used for sentiment analysis, they delivered a significant boost in accuracy over conventional approaches, reaching a maximum of 65.83%. The results highlight the limitations of conventional machine learning approaches when dealing with complex, unstructured text data and the potential of ensemble models. The models exhibited high accuracy across datasets and tasks, showcasing their versatility and effectiveness in obtaining valuable insights from unstructured text data and advancing the field of natural language processing.
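The abstract does not spell out the CBIM and UPAM formulations, so the sketch below shows only a generic probabilistic ensemble in the same spirit: several conventional classifiers are trained on TF-IDF features and their predicted class probabilities are averaged. The corpus, models, and labels are invented for illustration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier

# Tiny illustrative corpus; the thesis uses Twitter and NEISS data instead.
texts = ["great product, works well", "terrible service, very slow",
         "loved the experience", "awful quality, broke quickly",
         "fantastic support team", "worst purchase ever"]
labels = np.array([1, 0, 1, 0, 1, 0])          # 1 = positive, 0 = negative

vec = TfidfVectorizer()
X = vec.fit_transform(texts)

# Conventional classifiers whose predicted class probabilities are averaged,
# a simple stand-in for the probabilistic ensembling idea described above.
models = [LogisticRegression(max_iter=1000),
          MultinomialNB(),
          RandomForestClassifier(n_estimators=50, random_state=0)]
for m in models:
    m.fit(X, labels)

def ensemble_predict(new_texts):
    """Average each base model's class-probability estimates, then take the argmax."""
    Xn = vec.transform(new_texts)
    probs = np.mean([m.predict_proba(Xn) for m in models], axis=0)
    return probs.argmax(axis=1), probs

pred, probs = ensemble_predict(["slow and awful", "works great"])
print(pred, probs.round(3))
```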
9

Seismic Performance Evaluation And Economic Feasibility Of Self-Centering Concentrically Braced Frames

Dyanati Badabi, Mojtaba 07 June 2016 (has links)
No description available.
