1

Calibration Models and System Development for Compressive Sensing with Micromirror Arrays

Profeta, Rebecca L. January 2017 (has links)
No description available.
2

Multi-objective ROC learning for classification

Clark, Andrew Robert James January 2011 (has links)
Receiver operating characteristic (ROC) curves are widely used for evaluating classifier performance, having been applied to e.g. signal detection, medical diagnostics and safety critical systems. They allow examination of the trade-offs between true and false positive rates as misclassification costs are varied. Examination of the resulting graphs and calculation of the area under the ROC curve (AUC) allows assessment of how well a classifier is able to separate two classes and allows selection of an operating point with full knowledge of the available trade-offs.

In this thesis a multi-objective evolutionary algorithm (MOEA) is used to find classifiers whose ROC graph locations are Pareto optimal. The Relevance Vector Machine (RVM) is a state-of-the-art classifier that produces sparse Bayesian models, but is unfortunately prone to overfitting. Using the MOEA, hyper-parameters for RVM classifiers are set, optimising them not only in terms of true and false positive rates but also a novel measure of RVM complexity, thus encouraging sparseness, and producing approximations to the Pareto front. Several methods for regularising the RVM during the MOEA training process are examined and their performance evaluated on a number of benchmark datasets, demonstrating they possess the capability to avoid overfitting whilst producing performance equivalent to that of the maximum likelihood trained RVM.

A common task in bioinformatics is to identify genes associated with various genetic conditions by finding those genes useful for classifying a condition against a baseline. Typically, datasets contain large numbers of gene expressions measured in relatively few subjects. As a result of the high dimensionality and sparsity of examples, it can be very easy to find classifiers with near perfect training accuracies but which have poor generalisation capability. Additionally, depending on the condition and treatment involved, evaluation over a range of costs will often be desirable. An MOEA is used to identify genes for classification by simultaneously maximising the area under the ROC curve whilst minimising model complexity. This method is illustrated on a number of well-studied datasets and applied to a recent bioinformatics database resulting from the current InChianti population study.

Many classifiers produce “hard”, non-probabilistic classifications and are trained to find a single set of parameters, whose values are inevitably uncertain due to limited available training data. In a Bayesian framework it is possible to ameliorate the effects of this parameter uncertainty by averaging over classifiers weighted by their posterior probability. Unfortunately, the required posterior probability is not readily computed for hard classifiers. In this thesis an Approximate Bayesian Computation Markov Chain Monte Carlo algorithm is used to sample model parameters for a hard classifier using the AUC as a measure of performance. The ability to produce ROC curves close to the Bayes optimal ROC curve is demonstrated on a synthetic dataset. Due to the large numbers of sampled parametrisations, averaging over them when rapid classification is needed may be impractical, and thus methods for producing sparse weightings are investigated.
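The abstract above turns on computing ROC curves and the area under them (AUC). As a hedged illustration only, not code from the thesis, the minimal Python sketch below builds an ROC curve by sweeping a decision threshold over classifier scores and integrates it with the trapezoidal rule; the scores and labels are synthetic.

```python
# Minimal sketch (not from the thesis): ROC curve and AUC from classifier scores.
import numpy as np

def roc_curve_points(scores, labels):
    """Return (false positive rates, true positive rates) as the decision
    threshold sweeps downward from above the highest score."""
    order = np.argsort(-np.asarray(scores))      # descending by score
    labels = np.asarray(labels)[order]
    tps = np.cumsum(labels)                      # true positives so far
    fps = np.cumsum(1 - labels)                  # false positives so far
    tpr = tps / labels.sum()
    fpr = fps / (len(labels) - labels.sum())
    return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

def auc(fpr, tpr):
    """Area under the ROC curve by the trapezoidal rule."""
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))

# Synthetic two-class example: positives score higher on average.
rng = np.random.default_rng(0)
labels = np.concatenate([np.ones(50), np.zeros(50)])
scores = np.concatenate([rng.normal(1.0, 1.0, 50), rng.normal(-1.0, 1.0, 50)])
fpr, tpr = roc_curve_points(scores, labels)
print(f"AUC = {auc(fpr, tpr):.3f}")
```

In the multi-objective setting described in the abstract, each candidate classifier parametrisation maps to one (false positive rate, true positive rate) operating point, and the MOEA retains the non-dominated set of such points as its approximation to the Pareto front.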
3

Image-Based Modeling and Prediction of Non-Stationary Ground Motions

DAK HAZIRBABA, YILDIZ 01 May 2015 (has links)
Nonlinear dynamic analysis is a required step in seismic performance evaluation of many structures. Performing such an analysis requires input ground motions, which are often obtained through simulations, due to the lack of sufficient records representing a given scenario. As seismic ground motions are characterized by time-varying amplitude and frequency content, and the response of nonlinear structures is sensitive to the temporal variations in the seismic energy input, ground motion non-stationarities should be taken into account in simulations. This paper describes a nonparametric approach for modeling and prediction of non-stationary ground motions. Using Relevance Vector Machines, a regression model is developed which takes as input a set of seismic predictors and produces as output the expected evolutionary power spectral density, conditioned on the predictors. A demonstrative example is presented, where recorded and predicted ground motions are compared in the time, frequency, and time-frequency domains. Analysis results indicate a reasonable match between the recorded and predicted quantities.
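As a rough, hedged illustration of the time-frequency view mentioned in this abstract, and not the thesis's RVM-based model, the sketch below estimates the time-varying spectral content of a synthetic non-stationary signal with a spectrogram, one simple stand-in for an evolutionary power spectral density estimate; the sampling rate and the "ground motion" signal are assumed for the example.

```python
# Illustrative sketch only: a spectrogram as a crude evolutionary-PSD estimate
# for a synthetic non-stationary signal (not the thesis's RVM regression model).
import numpy as np
from scipy.signal import spectrogram

fs = 100.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 40, 1 / fs)
# Amplitude- and frequency-modulated noise: energy and dominant frequency
# both change over time, mimicking non-stationary ground motion.
envelope = np.exp(-0.5 * ((t - 10) / 5) ** 2)
carrier = np.sin(2 * np.pi * (2 + 0.2 * t) * t)
rng = np.random.default_rng(1)
motion = envelope * (carrier + 0.3 * rng.standard_normal(t.size))

f, tau, Sxx = spectrogram(motion, fs=fs, nperseg=256, noverlap=192)
peak = f[Sxx.argmax(axis=0)]                 # dominant frequency per time window
print("dominant frequency over time (Hz):", np.round(peak[:5], 2), "...")
```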
4

Remotely Sensed Data Assimilation Technique to Develop Machine Learning Models for Use in Water Management

Zaman, Bushra 01 May 2010 (has links)
Increasing population and water conflicts are making water management one of the most important issues of the present world. It has become absolutely necessary to find ways to manage water more efficiently. Technological advancement has introduced various techniques for data acquisition and analysis, and these tools can be used to address some of the critical issues that challenge water resource management. This research used learning machine techniques and information acquired through remote sensing to solve problems related to soil moisture estimation and crop identification on large spatial scales. In this dissertation, solutions were proposed in three problem areas that can be important in the decision-making process related to water management in irrigated systems. A data assimilation technique was used to build a learning machine model that generated soil moisture estimates commensurate with the scale of the data. The research was taken further by developing a multivariate machine learning algorithm to predict root zone soil moisture both in space and time. Further, a model was developed for supervised classification of multi-spectral reflectance data using a multi-class machine learning algorithm. The procedure was designed for classifying crops, but the model is data dependent and can be used with other datasets, and hence can be applied to other landcover classification problems. The dissertation compared the performance of the relevance vector and support vector machines in estimating soil moisture. A multivariate relevance vector machine algorithm was tested in the spatio-temporal prediction of soil moisture, and the multi-class relevance vector machine model was used for classifying different crop types. It was concluded that the classification scheme may uncover important data patterns contributing greatly to knowledge bases, and to scientific and medical research. The results from the soil moisture models would give farmers and irrigators a rough idea of the moisture status of their fields and also of their productivity. The models are part of a framework devised in an attempt to provide tools to support irrigation system operational decisions. This information could help in the overall improvement of agricultural water management practices for large irrigation systems. Conclusions were reached based on the performance of these machines in estimating soil moisture using remotely sensed data, forecasting spatial and temporal variation of soil moisture, and classifying data. These solutions provide a new perspective on problem-solving techniques by introducing new methods that have not previously been attempted.
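For orientation only, here is a hedged Python sketch of the supervised multi-class classification step described in this abstract, applied to per-pixel multispectral reflectance. The dissertation uses a multi-class relevance vector machine; since no RVM ships with scikit-learn, a support vector classifier stands in here, and the band values and crop labels are synthetic.

```python
# Hedged workflow sketch: multi-class classification of multispectral reflectance.
# An SVC stands in for the dissertation's multi-class RVM; all data is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_per_class, n_bands = 200, 6                # e.g. 6 reflectance bands (assumed)
classes = ["alfalfa", "corn", "fallow"]      # hypothetical crop labels
# Each crop gets its own mean spectral signature plus per-pixel noise.
means = rng.uniform(0.1, 0.6, size=(len(classes), n_bands))
X = np.vstack([m + 0.05 * rng.standard_normal((n_per_class, n_bands)) for m in means])
y = np.repeat(classes, n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```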
5

Bayesian Data-Driven Models for Irrigation Water Management

Torres-Rua, Alfonso F. 01 August 2011 (has links)
A crucial decision in the real-time management of today’s irrigation systems involves the coordination of diversions and delivery of water to croplands. Since most irrigation systems experience significant lags between when water is diverted and when it should be delivered, an important technical innovation in the next few years will involve improvements in short-term irrigation demand forecasting. The main objective of the research presented was the development of three such critically important models: (1) potential evapotranspiration forecasting; (2) hydraulic model error correction; and (3) estimation of aggregate water demands. These tools are based on statistical machine learning, or data-driven modeling, techniques of wide application in several areas of engineering analysis that can be used in irrigation and system management to provide improved and timely information to water managers. The development of such models is based on a Bayesian data-driven algorithm called the Relevance Vector Machine (RVM), and an extension of it, the Multivariate Relevance Vector Machine (MVRVM). These types of learning machines have the advantages of avoiding model overfitting, being highly robust in the presence of unseen data, and providing uncertainty estimates for the results (error bars). The models were applied in an irrigation system located in the Lower Sevier River Basin near Delta, Utah. The first model allows estimation of future crop water demand up to four days in advance, using only daily air temperatures and the MVRVM as the mapping algorithm. The second model minimizes the lumped error occurring in hydraulic simulation models; the RVM is applied as an error modeler, providing estimates of the errors occurring during the simulation runs. The third model provides estimates of future water releases for an entire agricultural area up to two days in advance, based on local data and satellite imagery. The results obtained indicate excellent adequacy in terms of accuracy, robustness, and stability, especially in the presence of unseen data. A comparison against the Multilayer Perceptron, another data-driven algorithm in wide use in engineering, further validates the suitability of the RVM and MVRVM for these types of processes.
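As a hedged sketch of the forecasting setup in model (1) only, the snippet below predicts a proxy for crop water demand one to four days ahead from the previous week of daily air temperatures. scikit-learn's ARDRegression (a sparse Bayesian linear model related in spirit to the RVM), wrapped in MultiOutputRegressor, stands in for the thesis's MVRVM; the temperature and evapotranspiration series are synthetic.

```python
# Hedged sketch of multi-step-ahead demand forecasting from lagged temperatures.
# ARDRegression + MultiOutputRegressor stand in for the MVRVM; data is synthetic.
import numpy as np
from sklearn.linear_model import ARDRegression
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(3)
days = np.arange(3 * 365)
temp = 18 + 12 * np.sin(2 * np.pi * days / 365) + 2 * rng.standard_normal(days.size)
et = 0.25 * temp + 1.0 + 0.3 * rng.standard_normal(days.size)   # toy ET proxy

lags, horizon = 7, 4
X, Y = [], []
for i in range(lags, days.size - horizon):
    X.append(temp[i - lags:i])            # last 7 days of air temperature
    Y.append(et[i:i + horizon])           # ET proxy for the next 4 days
X, Y = np.array(X), np.array(Y)

split = int(0.8 * len(X))                 # chronological train/test split
model = MultiOutputRegressor(ARDRegression())
model.fit(X[:split], Y[:split])
pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - Y[split:]) ** 2, axis=0))
print("RMSE per lead day (1-4):", np.round(rmse, 3))
```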
