201

Bayesian Experimental Design Framework Applied to Complex Polymerization Processes

Nabifar, Afsaneh 26 June 2012 (has links)
The Bayesian design approach is an experimental design technique which has the same objectives as standard experimental (full or fractional factorial) designs but with significant practical benefits over standard design methods. The most important advantage of the Bayesian design approach is that it incorporates prior knowledge about the process into the design to suggest a set of future experiments in an optimal, sequential and iterative fashion. Since for many complex polymerizations prior information is available, either in the form of experimental data or mathematical models, use of a Bayesian design methodology could be highly beneficial. Hence, exploiting this technique can lead to optimal performance in fewer trials, thus saving time and money. In this thesis, the basic steps and capabilities/benefits of the Bayesian design approach will be illustrated. To demonstrate the significant benefits of the Bayesian design approach and its superiority to the currently practised (standard) design of experiments, case studies drawn from representative complex polymerization processes, covering both batch and continuous processes, are presented. These include examples from nitroxide-mediated radical polymerization of styrene (bulk homopolymerization in the batch mode), continuous production of nitrile rubber in a train of CSTRs (emulsion copolymerization in the continuous mode), and cross-linking nitroxide-mediated radical copolymerization of styrene and divinyl benzene (bulk copolymerization in the batch mode, with cross-linking). All these case studies address important, yet practical, issues not only in the study of polymerization kinetics but also, more generally, in process engineering and improvement. Since the Bayesian design technique is perfectly general, it can potentially be applied to other polymerization variants or to any other chemical engineering process. Some of the advantages of the Bayesian methodology highlighted through its application to complex polymerization scenarios are: improvements with respect to the information content retrieved from process data, relative ease in changing factor levels midway through the experimentation, flexibility with factor ranges, overall “cost”-effectiveness (time and effort/resources) with respect to the number of experiments, and flexibility with respect to the source and quality of prior knowledge (screening experiments versus models and/or combinations). The most important novelty of the Bayesian approach is the simplicity and the natural way with which it follows the logic of the sequential model-building paradigm, taking full advantage of the researcher’s expertise and information (knowledge about the process or product) prior to the design, and invoking enhanced information content measures (the Fisher information matrix is maximized, which corresponds to minimizing the variances and shrinking the 95% joint confidence regions, hence improving the precision of the parameter estimates). In addition, the Bayesian analysis is amenable to a series of statistical diagnostic tests that one can carry out in parallel. These diagnostic tests serve to quantify the relative importance of the parameters (intimately related to the significance of the estimated factor effects) and their interactions, as well as the quality of the prior knowledge (in other words, the adequacy of the model or of the expert’s opinions used to generate the prior information, as the case might be).
In all the case studies described in this thesis, the general benefits of the Bayesian design were as described above. More specifically, with respect to the most complex of the examples, namely the cross-linking nitroxide-mediated radical polymerization (NMRP) of styrene and divinyl benzene, the investigations that followed the Bayesian-designed experiments led to even more interesting detailed kinetic and polymer characterization studies, which cover the second part of this thesis. This detailed synthesis, characterization and modeling effort, triggered by the Bayesian approach, set out to investigate whether the cross-linked polymer network synthesized under controlled radical polymerization (CRP) conditions had a more homogeneous structure than the network produced by regular free radical polymerization (FRP). In preparation for the identification of network homogeneity indicators based on polymer properties, the cross-linking kinetics of nitroxide-mediated radical polymerization of styrene (STY) in the presence of a small amount of divinyl benzene (DVB; as the cross-linker) and N-tert-butyl-N-(2-methyl-1-phenylpropyl)-O-(1-phenylethyl) hydroxylamine (I-TIPNO; as the unimolecular initiator) was investigated in detail, and the results were contrasted with regular FRP of STY/DVB and homopolymerization of STY in the presence of I-TIPNO as reference systems. The effects of [DVB], [I-TIPNO] and the [DVB]/[I-TIPNO] ratio on rate, molecular weights, gel content and swelling index were investigated. In parallel to our experimental investigations, a detailed mathematical model was developed and validated against the respective experimental data. Model predictions not only followed the general experimental trends very well but were also in good agreement with the experimental observations. Pursuing our investigations for a more reliable indicator of network homogeneity, the corresponding branched and cross-linked polymers were characterized. Thermo-mechanical analysis was used in an attempt to investigate the difference between polymer networks synthesized through FRP and NMRP. Results from both Differential Scanning Calorimetry (DSC) and Dynamic Mechanical Analysis (DMA) showed that, at the same cross-link density and conversion level, polymer networks produced by FRP and NMRP indeed exhibit comparable structures. Overall, it was notable that a wealth of process information was generated by such a practical experimental design technique, and with minimal experimental effort compared to previous (undesigned) efforts (and associated, often not well-founded, claims) in the literature!
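To give a feel for the information-based criterion mentioned above, the following is a minimal sketch (not the author's implementation) of choosing the next run from a candidate grid by maximizing the determinant of the accumulated information matrix X'X for an assumed two-factor linear model with interaction; the factor coding, candidate grid and prior runs are all hypothetical.

```python
import numpy as np

# Hypothetical two-factor linear model with interaction: y = b0 + b1*x1 + b2*x2 + b3*x1*x2.
def design_row(x1, x2):
    return np.array([1.0, x1, x2, x1 * x2])

def next_experiment(prior_X, candidates):
    """Pick the candidate run that maximizes det(X'X), i.e. the Fisher information
    accumulated from the prior experiments plus the new run (D-optimality)."""
    best, best_det = None, -np.inf
    for x1, x2 in candidates:
        X = np.vstack([prior_X, design_row(x1, x2)])
        d = np.linalg.det(X.T @ X)
        if d > best_det:
            best, best_det = (x1, x2), d
    return best, best_det

# Prior knowledge: three runs already performed (coded factor levels).
prior_runs = [(-1, -1), (1, -1), (-1, 1)]
prior_X = np.vstack([design_row(*r) for r in prior_runs])

# Candidate factor settings for the next run.
grid = [(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)]
print(next_experiment(prior_X, grid))
```

In this toy setting the procedure picks the remaining corner of the factorial square, which matches the intuition of sequentially completing the most informative design.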
202

A Latent Health Factor Model for Estimating Estuarine Ecosystem Health

Wu, Margaret 05 1900 (has links)
Assessment of the “health” of an ecosystem is often of great interest to those involved in monitoring and conserving ecosystems. Traditionally, scientists have quantified the health of an ecosystem using multimetric indices that are semi-qualitative. Recently, a statistics-based index called the Latent Health Factor Index (LHFI) was devised to address many inadequacies of the conventional indices. Unlike the conventional indices, the LHFI relies on standard modelling procedures, which accords it many advantages: the LHFI is less arbitrary, and it allows for straightforward model inference and for formal statistical prediction of health for a new site (using only supplementary environmental covariates). In contrast, with conventional indices, formal statistical prediction does not exist, meaning that proper estimation of health for a new site requires benthic data which are expensive and time-consuming to gather. As the LHFI modelling methodology is a relatively new concept, it has so far only been demonstrated (and validated) on freshwater ecosystems. The goal of this thesis is to apply the LHFI modelling methodology to estuarine ecosystems, particularly to the previously unassessed system in Richibucto, New Brunswick. Specifically, the aims of this thesis are threefold: firstly, to investigate whether the LHFI is even applicable to estuarine systems, since estuarine and freshwater metrics, or indicators of health, are quite different; secondly, to determine the appropriate form that the LHFI model should take if the technique is applicable; and thirdly, to assess the health of the Richibucto system. Note that the second objective includes determining which covariates may have a significant impact on estuarine health. As scientists have previously used the AZTI Marine Biotic Index (AMBI) and the Infaunal Trophic Index (ITI) as measurements of estuarine ecosystem health, this thesis investigates LHFI models using metrics from these two indices simultaneously. Two sets of models were considered in a Bayesian framework and implemented using Markov chain Monte Carlo techniques, the first using only metrics from AMBI, and the second using metrics from both AMBI and ITI. Both sets of LHFI models were successful in that they were able to make distinctions between health levels at different sites.
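As a rough illustration of the kind of latent-variable machinery behind an index such as the LHFI, the sketch below fits a one-factor model y_ij = λ_j h_i + ε with a simple Gibbs sampler on simulated data; the single latent "health" score per site, the known unit noise variance and the simulated metrics are all assumptions, and this is not the thesis's AMBI/ITI model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_metrics = 30, 4

# Simulate data from the assumed model: metric j at site i is lambda_j * h_i + noise.
true_h = rng.normal(size=n_sites)
true_lam = np.array([1.0, 0.8, -0.5, 1.2])
y = true_h[:, None] * true_lam[None, :] + rng.normal(size=(n_sites, n_metrics))

h = np.zeros(n_sites)
lam = np.ones(n_metrics)
h_draws = []

for it in range(2000):
    # h_i | lambda, y ~ Normal (standard-normal prior, unit noise variance assumed known).
    prec_h = 1.0 + np.sum(lam ** 2)
    h = rng.normal((y @ lam) / prec_h, 1.0 / np.sqrt(prec_h))
    # lambda_j | h, y ~ Normal (standard-normal prior).
    prec_l = 1.0 + np.sum(h ** 2)
    lam = rng.normal((y.T @ h) / prec_l, 1.0 / np.sqrt(prec_l))
    if it >= 1000:
        h_draws.append(h * np.sign(lam[0]))   # fix the sign indeterminacy of the factor

post_h = np.mean(h_draws, axis=0)
print(np.corrcoef(post_h, true_h)[0, 1])      # posterior-mean health vs. simulated truth
```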
203

Quantitative Testing of Probabilistic Phase Unwrapping Methods

Moran, Jodi January 2001 (has links)
The reconstruction of a phase surface from the observed principal values is required for a number of applications, including synthetic aperture radar (SAR) and magnetic resonance imaging (MRI). However, the process of reconstruction, called
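As the title suggests, the reconstruction of a phase surface from its observed principal values is known as phase unwrapping; in one dimension the basic operation can be illustrated with NumPy (a generic, noise-free example, unrelated to the probabilistic methods the thesis evaluates):

```python
import numpy as np

true_phase = np.linspace(0.0, 6.0 * np.pi, 200)   # smoothly increasing phase
principal = np.angle(np.exp(1j * true_phase))     # observed wrapped values in (-pi, pi]
recovered = np.unwrap(principal)                  # add back the lost multiples of 2*pi

print(np.allclose(recovered, true_phase))         # True: the noise-free 1-D case is easy
```

The thesis is concerned with probabilistic methods aimed at the much harder noisy, two-dimensional case arising in SAR and MRI.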
204

Integration in Computer Experiments and Bayesian Analysis

Karuri, Stella January 2005 (has links)
Mathematical models are commonly used in science and industry to simulate complex physical processes. These models are implemented by computer codes which are often complex. For this reason, the codes are also expensive in terms of computation time, and this limits the number of simulations in an experiment. The codes are also deterministic, which means that output from a code has no measurement error.

One modelling approach in dealing with deterministic output from computer experiments is to assume that the output is composed of a drift component and systematic errors, which are stationary Gaussian stochastic processes. A Bayesian approach is desirable as it takes into account all sources of model uncertainty. Apart from prior specification, one of the main challenges in a complete Bayesian model is integration. We take a Bayesian approach with a Jeffreys prior on the model parameters. To integrate over the posterior, we use two approximation techniques on the log-scaled posterior of the correlation parameters. First, we approximate the Jeffreys prior on the untransformed parameters; this enables us to specify a uniform prior on the transformed parameters, which makes Markov chain Monte Carlo (MCMC) simulations run faster. For the second approach, we approximate the posterior with a Normal density.

A large part of the thesis is focused on the problem of integration. Integration is often a goal in computer experiments and, as previously mentioned, necessary for inference in Bayesian analysis. Sampling strategies are more challenging in computer experiments, particularly when dealing with computationally expensive functions. We focus on the problem of integration by using a sampling approach which we refer to as "GaSP integration". This approach assumes that the integrand over some domain is a Gaussian random variable. It follows that the integral itself is a Gaussian random variable and the Best Linear Unbiased Predictor (BLUP) can be used as an estimator of the integral. We show that the integration estimates from GaSP integration have lower absolute errors. We also develop the Adaptive Sub-region Sampling Integration Algorithm (ASSIA) to improve GaSP integration estimates. The algorithm recursively partitions the integration domain into sub-regions in which GaSP integration can be applied more effectively. As a result of the adaptive partitioning of the integration domain, the adaptive algorithm varies sampling to suit the variation of the integrand. This "strategic sampling" can be used to explore the structure of functions in computer experiments.
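The "GaSP integration" idea — model the integrand as a Gaussian process and take the posterior mean (BLUP) of its integral — can be sketched in one dimension with a squared-exponential kernel, for which the kernel's integral against each design point has a closed form; the kernel choice, length-scale and test function below are assumptions, and this is not the thesis's ASSIA algorithm.

```python
import numpy as np
from scipy.special import erf

def bq_integral(x, y, ell=0.2, noise=1e-8):
    """Treat f as a zero-mean Gaussian process with a squared-exponential kernel and
    return the posterior-mean (BLUP) estimate of the integral of f over [0, 1]."""
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell ** 2) + noise * np.eye(len(x))
    # Closed-form integral of the kernel against each design point over [0, 1].
    z = ell * np.sqrt(np.pi / 2.0) * (
        erf((1.0 - x) / (np.sqrt(2.0) * ell)) + erf(x / (np.sqrt(2.0) * ell))
    )
    return z @ np.linalg.solve(K, y)

x = np.linspace(0.01, 0.99, 12)            # hypothetical design points
f = lambda t: np.sin(2.0 * np.pi * t) + t  # deterministic "computer code" output
print(bq_integral(x, f(x)), 0.5)           # GP estimate vs. the exact integral over [0, 1]
```

Partitioning the domain and applying the same estimator on each sub-region, as ASSIA does, concentrates the design points where the integrand varies most.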
207

Automated detection of breast cancer using SAXS data and wavelet features

Erickson, Carissa Michelle 02 August 2005 (has links)
The overarching goal of this project was to improve breast cancer screening protocols first by collecting small angle x-ray scattering (SAXS) images from breast biopsy tissue, and second, by applying pattern recognition techniques as a semi-automatic screen. Wavelet based features were generated from the SAXS image data. The features were supplied to a classifier, which sorted the images into distinct groups, such as normal and tumor.

The main problem in the project was to find a set of features that provided sufficient separation for classification into groups of normal and tumor. In the original SAXS patterns, information useful for classification was obscured. The wavelet maps allowed new scale-based information to be uncovered from each SAXS pattern. The new information was subsequently used to define features that allowed for classification. Several calculations were tested to extract useful features from the wavelet decomposition maps. The wavelet map average intensity feature was selected as the most promising feature. The wavelet map intensity feature was improved by using pre-processing to remove the high central intensities from the SAXS patterns, and by using different wavelet bases for the wavelet decomposition.

The investigation undertaken for this project showed very promising results. A classification rate of 100% was achieved for distinguishing between normal samples and tumor samples. The system also showed promising results when tested on unrelated MRI data. In the future, the semi-automatic pattern recognition tool developed for this project could be automated. With a larger set of data for training and testing, the tool could be improved upon and used to assist radiologists in the detection and classification of breast lesions.
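A minimal sketch of the wavelet-map average-intensity feature described above, using the PyWavelets library on a random stand-in for a pre-processed SAXS pattern; the wavelet basis, decomposition depth and image size are assumptions rather than the settings used in the thesis.

```python
import numpy as np
import pywt

def wavelet_map_intensity_features(image, wavelet="db2", levels=3):
    """Decompose an image with a 2-D discrete wavelet transform and return the
    mean absolute coefficient of each detail map, one feature per map."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    features = []
    for level_details in coeffs[1:]:        # skip the approximation coefficients
        for detail_map in level_details:    # horizontal, vertical, diagonal maps
            features.append(np.mean(np.abs(detail_map)))
    return np.array(features)

# Hypothetical stand-in for a pre-processed SAXS pattern (central beam already masked).
rng = np.random.default_rng(0)
pattern = rng.random((128, 128))
print(wavelet_map_intensity_features(pattern))   # 9 features: 3 levels x 3 detail maps
```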
208

Conditioning graphs: practical structures for inference in Bayesian networks

Grant, Kevin John 16 January 2007 (has links)
Probability is a useful tool for reasoning when faced with uncertainty. Bayesian networks offer a compact representation of a probabilistic problem, exploiting independence amongst variables that allows a factorization of the joint probability into much smaller local probability distributions.

The standard approach to probabilistic inference in Bayesian networks is to compile the graph into a join-tree and perform computation over this secondary structure. While join-trees are among the most time-efficient methods of inference in Bayesian networks, they are not always appropriate for certain applications. The memory requirements of join-trees can be prohibitively large. The algorithms for computing over join-trees are large and involved, making them difficult to port to other systems or to be understood by general programmers without Bayesian network expertise.

This thesis proposes a different method for probabilistic inference in Bayesian networks. We present a data structure called a conditioning graph, which is a run-time representation of Bayesian network inference. The structure mitigates many of the problems of join-tree inference. For example, conditioning graphs require much less space to store and compute over. The algorithm for calculating probabilities from a conditioning graph is small and basic, making it portable to virtually any architecture. And the details of Bayesian network inference are compiled away during the construction of the conditioning graph, leaving an intuitive structure that is easy to understand and implement without any Bayesian network expertise.

In addition to the conditioning graph architecture, we present several improvements to the model that maintain its small and simple style while reducing the runtime required for computing over it. We present two heuristics for choosing variable orderings that result in shallower elimination trees, reducing the overall complexity of computing over conditioning graphs. We also demonstrate several compile-time and runtime extensions to the algorithm that can produce substantial speedups while adding only a small space constant to the implementation. We also show how to cache intermediate values in conditioning graphs during probabilistic computation, which allows conditioning graphs to perform at the same speed as standard methods by avoiding duplicate computation, at the price of more memory. The methods presented also conform to the basic style of the original algorithm. We demonstrate a novel technique for reducing the amount of memory required for caching.

We demonstrate empirically the compactness, portability, and ease of use of conditioning graphs. We also show that the optimizations of conditioning graphs allow competitive behaviour with standard methods in many circumstances, while still preserving their small and simple style. Finally, we show that the memory required under caching can be quite modest, meaning that conditioning graphs can be competitive with standard methods in terms of time, using a fraction of the memory.
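The conditioning-graph structure itself is compiled from the network, but the flavour of the computation it encodes — recursively conditioning on unassigned variables and multiplying small local distributions — can be shown with a toy two-variable network; this is a generic recursive-conditioning sketch, not the thesis's data structure, and the network and probabilities are made up.

```python
# Tiny rain/wet network: P(rain) and P(wet | rain).
p_rain = {True: 0.2, False: 0.8}
p_wet_given_rain = {True: {True: 0.9, False: 0.1}, False: {True: 0.3, False: 0.7}}

def prob(assignment):
    """Joint probability of a full assignment, as a product of local distributions."""
    return p_rain[assignment["rain"]] * p_wet_given_rain[assignment["rain"]][assignment["wet"]]

def query(var, value, evidence):
    """P(var = value | evidence) by recursively conditioning on the unassigned variables."""
    def total(assignment, remaining):
        if not remaining:
            return prob(assignment)
        v, rest = remaining[0], remaining[1:]
        return sum(total({**assignment, v: val}, rest) for val in (True, False))

    hidden = [v for v in ("rain", "wet") if v not in evidence and v != var]
    numerator = total({**evidence, var: value}, hidden)
    denominator = sum(total({**evidence, var: val}, hidden) for val in (True, False))
    return numerator / denominator

print(query("rain", True, {"wet": True}))   # 0.18 / (0.18 + 0.24) = 0.4286
```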
209

Individualized selection of learning objects

Liu, Jian 15 May 2009 (has links)
Rapidly evolving Internet and web technologies and international efforts on standardization of learning object metadata enable learners in a web-based educational system ubiquitous access to multiple learning resources. It is becoming both more necessary and more feasible to provide individualized help with selecting learning materials, so that learners can make the most suitable choice among many alternatives.

A framework for individualized learning object selection, called Eliminating and Optimized Selection (EOS), is presented in this thesis. This framework contains a suggestion for extending learning object metadata specifications and presents an approach to selecting a short list of suitable learning objects appropriate for an individual learner in a particular learning context. The key features of the EOS approach are to evaluate the suitability of a learning object in its situated context and to refine the evaluation by using available historical usage information about the learning object. A Learning Preference Survey was conducted to discover and determine the relationships between the importance of learning object attributes and learner characteristics. Two weight models, a Bayesian Network Weight Model and a Naïve Bayes Model, were derived from the data collected in the survey. Given a particular learner, both of these models provide a set of personal weights for the learning object features required by the individualized learning object selection.

The optimized selection approach was demonstrated and verified using simulated selections. Seventy simulated learning objects were evaluated for three simulated learners within simulated learning contexts. Both the Bayesian Network Weight Model and the Naïve Bayes Model were used in the selection of simulated learning objects. The results produced by the two algorithms were compared and found to be highly correlated with each other in the domain where the testing was conducted.

A Learning Object Selection Study was performed to validate the learning object selection algorithms against human experts. By comparing machine selection with human expert selection, we found that the agreement between machine selection and human expert selection was higher than the agreement among the human experts alone.
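Once a weight model has produced personal weights for a learner, the selection step itself reduces to scoring and ranking candidates; the sketch below shows only that step, with hypothetical attribute names, objects and weights (the actual EOS framework and its weight models are richer than this).

```python
import numpy as np

# Hypothetical learning-object attribute scores, each in [0, 1] (higher is better).
attributes = ["difficulty_match", "media_preference", "semantic_density", "past_rating"]

learning_objects = {
    "lo_algebra_video": np.array([0.9, 0.8, 0.4, 0.7]),
    "lo_algebra_text":  np.array([0.6, 0.3, 0.8, 0.9]),
    "lo_algebra_quiz":  np.array([0.7, 0.5, 0.6, 0.4]),
}

# Personal weights for one learner, e.g. as a weight model might supply them (made up here).
learner_weights = np.array([0.4, 0.3, 0.1, 0.2])

def rank_objects(objects, weights, top_n=2):
    """Score each candidate by a weighted sum of its attributes and return the short list."""
    scores = {name: float(weights @ feats) for name, feats in objects.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(rank_objects(learning_objects, learner_weights))
```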
210

Bayesian Techniques for Adaptive Acoustic Surveillance

Morton, Kenneth D. January 2010 (has links)
Automated acoustic sensing systems are required to detect, classify and localize acoustic signals in real-time. Despite the fact that humans are capable of performing acoustic sensing tasks with ease in a variety of situations, the performance of current automated acoustic sensing algorithms is limited by seemingly benign changes in environmental or operating conditions. In this work, a framework for acoustic surveillance that is capable of accounting for changing environmental and operational conditions is developed and analyzed. The algorithms employed in this work utilize non-stationary and nonparametric Bayesian inference techniques to allow the resulting framework to adapt to varying background signals and allow the system to characterize new signals of interest when additional information is available. The performance of each of the two stages of the framework is compared to existing techniques and superior performance of the proposed methodology is demonstrated. The algorithms developed operate on the time-domain acoustic signals in a nonparametric manner, thus enabling them to operate on other types of time-series data without the need to perform application-specific tuning. This is demonstrated in this work as the developed models are successfully applied, without alteration, to landmine signatures resulting from ground penetrating radar data. The nonparametric statistical models developed in this work for the characterization of acoustic signals may ultimately be useful not only in acoustic surveillance but also in other topics within acoustic sensing.
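The nonparametric Bayesian machinery referred to above typically rests on priors, such as the Dirichlet process, that do not fix the number of signal classes in advance; the sketch below draws cluster assignments from a Chinese restaurant process to show that behaviour, and is a generic illustration rather than the acoustic models developed in the thesis.

```python
import numpy as np

def sample_crp(n_items, alpha, rng):
    """Draw cluster assignments from a Chinese restaurant process with concentration alpha:
    a nonparametric prior that lets the number of signal classes grow with the data."""
    assignments = [0]
    for i in range(1, n_items):
        counts = np.bincount(assignments)
        probs = np.append(counts, alpha) / (i + alpha)   # existing clusters vs. a new one
        assignments.append(rng.choice(len(probs), p=probs))
    return np.array(assignments)

rng = np.random.default_rng(1)
print(sample_crp(20, alpha=1.0, rng=rng))   # e.g. a handful of clusters for 20 observations
```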
