About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

Computational Modeling of Cancer Progression

Shahrabi Farahani, Hossein January 2013 (has links)
Cancer is a multi-stage process resulting from the accumulation of genetic mutations. Data obtained from assaying a tumor contain only the set of mutations in the tumor and lack information about their temporal order. Learning the chronological order of the genetic mutations is an important step towards understanding the disease. The probability of a mutation being introduced into a tumor increases if certain mutations that promote it have already occurred. Such dependencies induce what we call the monotonicity property in cancer progression, and a realistic model of cancer progression should take this property into account. In this thesis, we present two models for cancer progression and algorithms for learning them. In the first model, we propose Progression Networks (PNs), a special class of Bayesian networks. In learning PNs, the issue of monotonicity is taken into consideration. The problem of learning PNs is reduced to Mixed Integer Linear Programming (MILP), an NP-hard problem for which very good heuristics exist. We also developed a program, DiProg, for learning PNs. In the second model, the problem of noise in biological experiments is addressed by introducing hidden variables. We call this model the Hidden variable Oncogenetic Network (HON). In a HON, two variables are assigned to each node: a hidden variable that represents the progression of cancer to the node and an observable random variable that represents the observation of the mutation corresponding to the node. We devised a structural Expectation Maximization (EM) algorithm for learning HONs. In the M-step of the structural EM algorithm, we need to perform a considerable number of inference tasks. Because exact inference is tractable only on Bayesian networks with bounded treewidth, we also developed an algorithm for learning bounded-treewidth Bayesian networks by reducing the problem to a MILP. Our algorithms performed well on synthetic data. We also tested them on cytogenetic data from renal cell carcinoma, and the progression networks learned by both algorithms are in agreement with previously published results. MicroRNAs are short non-coding RNAs involved in post-transcriptional regulation. A-to-I editing of microRNAs converts adenosine to inosine in double-stranded RNA. We developed a method for determining editing levels in mature microRNAs from high-throughput RNA sequencing data from the mouse brain. Here, for the first time, we showed that the level of editing increases with development.
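The monotonicity property described in this abstract can be illustrated with a toy simulation: the chance of acquiring a mutation rises once the mutations that promote it are already present. The sketch below is not the thesis's DiProg software; the gene names, parent structure, and probabilities are invented for illustration only.

```python
# Toy sketch (not the thesis's DiProg software): sampling tumor mutation
# profiles from a hypothetical progression network in which the probability
# of acquiring a mutation rises once its "promoting" parent mutations are
# present -- the monotonicity property described above.
import random

# Hypothetical parent structure and conditional probabilities.
PARENTS = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
P_ROOT, P_BASE, P_PROMOTED = 0.4, 0.05, 0.6

def sample_tumor(rng):
    """Sample one tumor's mutation set in a topological order of the network."""
    mutated = set()
    for gene in ["A", "B", "C", "D"]:
        if not PARENTS[gene]:
            p = P_ROOT
        elif all(parent in mutated for parent in PARENTS[gene]):
            p = P_PROMOTED
        else:
            p = P_BASE
        if rng.random() < p:
            mutated.add(gene)
    return mutated

rng = random.Random(0)
tumors = [sample_tumor(rng) for _ in range(1000)]
# Monotonicity shows up as conditional enrichment: P(D | B and C) >> P(D).
with_bc = [t for t in tumors if {"B", "C"} <= t]
print(sum("D" in t for t in with_bc) / max(len(with_bc), 1),
      sum("D" in t for t in tumors) / len(tumors))
```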
192

Combining measurements with deterministic model outputs: predicting ground-level ozone

Liu, Zhong 05 1900 (has links)
The main topic of this thesis is how to combine outputs from deterministic models with measurements from monitoring stations for air pollutants or other meteorological variables. We consider two approaches to this problem. The first uses the Bayesian Melding (BM) model proposed by Fuentes and Raftery (2005). We implement this model and conduct several simulation studies to examine its performance in different scenarios. We also apply the melding model to ozone data to show the importance of using it to calibrate the model outputs, that is, to adjust the model outputs for the prediction of measurements. Due to the Bayesian framework of the melding model, we can extend it to incorporate other components such as ensemble models and reversible jump MCMC for variable selection. However, the BM model is purely spatial, whereas in practice we generally have to deal with space-time datasets. This deficiency of the BM approach leads us to a second approach, an alternative to the BM model: a linear mixed model (unlike most linear mixed models, the random effects are spatially correlated) with temporally and spatially correlated residuals. We assume the spatial and temporal correlations are separable and use an AR process to model the temporal correlation. We also develop a multivariate version of this model. Both the melding model and the linear mixed model are Bayesian hierarchical models, which can better estimate the uncertainties of the estimates and predictions.
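The separability assumption mentioned above (spatial and temporal correlation combining multiplicatively, with an AR process in time) can be written down in a few lines. The sketch below is illustrative only; the station coordinates, covariance family, and parameter values are my own choices, not those used in the thesis.

```python
# Illustrative sketch (not the thesis's code): a separable space-time
# covariance built as the Kronecker product of an exponential spatial
# covariance and an AR(1) temporal correlation, as assumed in the second
# model described above. Stations and parameters are hypothetical.
import numpy as np

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])  # hypothetical stations
sigma2, phi_s = 1.0, 1.5          # spatial variance and range (assumed)
rho, n_times = 0.7, 4             # AR(1) coefficient and number of time points

dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
C_space = sigma2 * np.exp(-dists / phi_s)                # exponential spatial covariance
lags = np.abs(np.subtract.outer(np.arange(n_times), np.arange(n_times)))
C_time = rho ** lags                                     # AR(1) temporal correlation

C = np.kron(C_time, C_space)      # separability: covariance = temporal x spatial
print(C.shape)                    # (n_times * n_stations) square matrix
```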
193

Multivariate Bayesian Process Control

Yin, Zhijian 01 August 2008 (has links)
Multivariate control charts are valuable tools for multivariate statistical process control (MSPC), used to monitor industrial processes and to detect abnormal process behavior. It has been shown in the literature that Bayesian control charts are optimal tools for controlling a process compared with non-Bayesian charts. To use any control chart, three parameters must be specified: the sample size, the sampling interval and the control limit. Traditionally, control chart design has been based on statistical performance. Recently, industrial practitioners and academic researchers have increasingly recognized the cost benefits of applying economically designed control charts to quality control, equipment condition monitoring, and maintenance decision-making. The primary objective of this research is to design multivariate Bayesian control charts (MVBCH) for both quality control and condition-based maintenance (CBM) applications. Although considerable research has been done to develop MSPC tools under the assumption that the observations are independent, little attention has been given to MSPC tools for monitoring multivariate autocorrelated processes. In this research, we compare the performance of the squared prediction error (SPE) chart using a vector autoregressive moving average with exogenous variables (VARMAX) model and a partial least squares (PLS) model for a multivariate autocorrelated process. The study shows that SPE control charts based on the VARMAX model allow rapid detection of process disturbances while reducing false alarms. Next, the economic and economic-statistical design of an MVBCH for quality control is developed, using the control limit policy proved to be optimal by Makis (2007). The computational results illustrate that the MVBCH performs considerably better than the MEWMA chart, especially for smaller mean shifts. Sensitivity analyses further explore the impact of a misspecified out-of-control mean on the actual average cost. Finally, the design of an MVBCH for CBM applications is considered, using the same control limit policy structure and including an observable failure state. Optimization models for the economic and economic-statistical design of the MVBCH for a three-state CBM model are developed, and comparison results show that the MVBCH performs better than the recently developed CBM Chi-square chart.
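The SPE charting idea referred to above, charting the squared prediction error of a time-series model's residuals, can be sketched in a few lines. The code below uses a simple least-squares VAR(1) fit as a stand-in for the VARMAX model used in the thesis; the simulated data, mean shift, and empirical 99th-percentile control limit are all invented for illustration.

```python
# Minimal sketch (not the thesis's study): an SPE-style chart on the
# residuals of a VAR(1) model fitted by least squares -- a simplified
# stand-in for the VARMAX-based SPE chart discussed above.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.6, 0.1], [0.0, 0.5]])         # assumed autoregressive matrix
X = np.zeros((300, 2))
for t in range(1, 300):                        # simulate an autocorrelated process
    X[t] = X[t - 1] @ A.T + rng.normal(scale=0.5, size=2)
X[250:, 0] += 1.5                              # inject a mean shift to detect

train, test = X[:200], X[200:]
# Least-squares fit of X_t on X_{t-1} over the in-control training window.
A_hat, *_ = np.linalg.lstsq(train[:-1], train[1:], rcond=None)
spe_train = np.sum((train[1:] - train[:-1] @ A_hat) ** 2, axis=1)
limit = np.quantile(spe_train, 0.99)           # empirical control limit

spe_test = np.sum((test[1:] - test[:-1] @ A_hat) ** 2, axis=1)
alarms = np.nonzero(spe_test > limit)[0]
print("first alarm at test index:", alarms[0] if alarms.size else None)
```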
194

Evolving Paradigms in the Treatment of Hepatitis B

Woo, Gloria 05 September 2012 (has links)
Hepatitis B is a serious global health problem, with over 2 billion people infected worldwide and 350 million suffering from chronic hepatitis B (CHB) infection. Infection can lead to chronic hepatitis, cirrhosis and hepatocellular carcinoma (HCC), accounting for 320,000 deaths per year. Numerous treatments are available, but with a growing number of therapies, each with considerable trade-offs, the optimal treatment strategy is not transparent. This dissertation investigates the relative efficacy of treatments for CHB and estimates the health-related quality of life (HRQOL) and health utilities of mild to advanced CHB patients. A systematic review of published randomized controlled trials comparing surrogate outcomes for the first year of treatment was performed. Bayesian mixed treatment comparison meta-analysis was used to synthesize odds ratios, including 95% credible intervals and predicted probabilities of each outcome, comparing all currently available treatments in HBeAg-positive and/or HBeAg-negative CHB patients. Among HBeAg-positive patients, tenofovir and entecavir were most effective, while in HBeAg-negative patients, tenofovir was the treatment of choice. Health state utilities and HRQOL for patients with CHB, stratified by disease stage, were elicited from patients attending tertiary care clinics at the University Health Network in Toronto. Respondents completed the standard gamble, EQ5D, Health Utilities Index Mark 3 (HUI3), Short-Form 36 version 2 and a demographics survey in their preferred language of English, Cantonese or Mandarin. Patient charts were accessed to determine disease stage and co-morbidities. The study included 433 patients, of whom 294 had no cirrhosis, 79 had compensated cirrhosis, 7 had decompensated cirrhosis, 23 had HCC and 30 had received liver transplants. Mean standard gamble utilities were 0.89, 0.87, 0.82, 0.84 and 0.86 for the respective disease stages. HRQOL in CHB patients was impaired only at later stages of disease; neither chronic infection nor antiviral treatment lowered HRQOL. Patients with CHB do not experience the lower HRQOL seen in patients with hepatitis C. The next step in this area of research is to incorporate the estimates synthesized by the current studies into a decision model evaluating the cost-effectiveness of treatment, to provide guidance on the optimal therapy for patients with HBeAg-positive and HBeAg-negative CHB.
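The abstract mentions converting synthesized odds ratios into predicted probabilities of each outcome. The arithmetic behind that conversion is shown below; the baseline probability and odds ratio are invented numbers, not results from the dissertation.

```python
# Worked illustration (numbers invented, not from the dissertation): how a
# treatment's predicted probability of an outcome follows from a baseline
# (control) probability and an odds ratio, as synthesized in a mixed
# treatment comparison.
def predicted_probability(p_control: float, odds_ratio: float) -> float:
    """Convert a control-arm probability and an odds ratio into the
    treatment-arm probability on the probability scale."""
    odds_control = p_control / (1.0 - p_control)
    odds_treatment = odds_ratio * odds_control
    return odds_treatment / (1.0 + odds_treatment)

# Hypothetical example: a 20% response rate on the reference treatment and
# an odds ratio of 2.5 for a comparator gives roughly a 38.5% response rate.
print(round(predicted_probability(0.20, 2.5), 3))
```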
195

Conditioning graphs: practical structures for inference in Bayesian networks

Grant, Kevin John 16 January 2007
Probability is a useful tool for reasoning when faced with uncertainty. Bayesian networks offer a compact representation of a probabilistic problem, exploiting independence amongst variables that allows a factorization of the joint probability into much smaller local probability distributions.

The standard approach to probabilistic inference in Bayesian networks is to compile the graph into a join-tree and perform computation over this secondary structure. While join-trees are among the most time-efficient methods of inference in Bayesian networks, they are not always appropriate for certain applications. The memory requirements of a join-tree can be prohibitively large, and the algorithms for computing over join-trees are large and involved, making them difficult to port to other systems or to be understood by general programmers without Bayesian network expertise.

This thesis proposes a different method for probabilistic inference in Bayesian networks. We present a data structure called a conditioning graph, which is a run-time representation of Bayesian network inference. The structure mitigates many of the problems of join-tree inference. For example, conditioning graphs require much less space to store and compute over. The algorithm for calculating probabilities from a conditioning graph is small and basic, making it portable to virtually any architecture. And the details of Bayesian network inference are compiled away during the construction of the conditioning graph, leaving an intuitive structure that is easy to understand and implement without any Bayesian network expertise.

In addition to the conditioning graph architecture, we present several improvements to the model that maintain its small and simplistic style while reducing the runtime required for computing over it. We present two heuristics for choosing variable orderings that result in shallower elimination trees, reducing the overall complexity of computing over conditioning graphs. We also demonstrate several compile-time and runtime extensions to the algorithm that can produce substantial speedups while adding only a small space constant to the implementation. We also show how to cache intermediate values in conditioning graphs during probabilistic computation, which allows conditioning graphs to perform at the same speed as standard methods by avoiding duplicate computation, at the price of more memory. The methods presented conform to the basic style of the original algorithm. We also demonstrate a novel technique for reducing the amount of memory required for caching.

We demonstrate empirically the compactness, portability, and ease of use of conditioning graphs. We also show that the optimizations of conditioning graphs allow competitive behaviour with standard methods in many circumstances, while still preserving their small and simple style. Finally, we show that the memory required under caching can be quite modest, meaning that conditioning graphs can be competitive with standard methods in terms of time while using a fraction of the memory.
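The core idea, conditioning on a variable and recursing, can be illustrated on a toy network. The sketch below is not the thesis's conditioning-graph data structure (it has no compiled elimination tree or caching); it is a plain recursive enumeration over an invented three-node chain that shows the style of computation.

```python
# Small sketch (not the thesis's code): recursive conditioning-style
# inference on a toy chain A -> B -> C. Each call conditions on the next
# unassigned variable and sums out its values; the thesis's structure
# additionally compiles the network into an elimination tree and can cache
# intermediate results.
CPTS = {
    "A": lambda ctx: 0.3,                          # P(A=1)
    "B": lambda ctx: 0.8 if ctx["A"] else 0.1,     # P(B=1 | A)
    "C": lambda ctx: 0.9 if ctx["B"] else 0.2,     # P(C=1 | B)
}
ORDER = ["A", "B", "C"]        # topological order: parents before children

def prob(order, ctx, evidence):
    """Probability of the evidence, summing out all unassigned variables."""
    if not order:
        return 1.0
    var, rest = order[0], order[1:]
    values = [evidence[var]] if var in evidence else [0, 1]
    total = 0.0
    for val in values:          # condition on `var` and recurse
        p_one = CPTS[var](ctx)
        p_val = p_one if val == 1 else 1.0 - p_one
        total += p_val * prob(rest, {**ctx, var: val}, evidence)
    return total

p_c = prob(ORDER, {}, {"C": 1})                    # P(C=1)
p_c_and_a = prob(ORDER, {}, {"C": 1, "A": 1})      # P(C=1, A=1)
print(round(p_c, 4), round(p_c_and_a / p_c, 4))    # P(C=1), P(A=1 | C=1)
```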
196

Individualized selection of learning objects

Liu, Jian 15 May 2009
Rapidly evolving Internet and web technologies and international efforts on the standardization of learning object metadata give learners in a web-based educational system ubiquitous access to multiple learning resources. It is becoming both more necessary and more feasible to provide individualized help with selecting learning materials, so that the most suitable choice can be made among many alternatives.

A framework for individualized learning object selection, called Eliminating and Optimized Selection (EOS), is presented in this thesis. This framework contains a suggestion for extending learning object metadata specifications and presents an approach to selecting a short list of suitable learning objects appropriate for an individual learner in a particular learning context. The key features of the EOS approach are to evaluate the suitability of a learning object in its situated context and to refine the evaluation by using available historical usage information about the learning object. A Learning Preference Survey was conducted to discover and determine the relationships between the importance of learning object attributes and learner characteristics. Two weight models, a Bayesian Network Weight Model and a Naïve Bayes Model, were derived from the data collected in the survey. Given a particular learner, both of these models provide a set of personal weights for learning object features, as required by the individualized learning object selection.

The optimized selection approach was demonstrated and verified using simulated selections. Seventy simulated learning objects were evaluated for three simulated learners within simulated learning contexts. Both the Bayesian Network Weight Model and the Naïve Bayes Model were used in the selection of simulated learning objects. The results produced by the two algorithms were compared, and the two algorithms correlated highly with each other in the domain where the testing was conducted.

A Learning Object Selection Study was performed to validate the learning object selection algorithms against human experts. By comparing machine selection with human expert selection, we found that the agreement between machine selection and human expert selection is higher than the agreement among the human experts alone.
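The role of the per-learner weights described above can be illustrated with a short ranking sketch. The attribute names, weights, and candidate objects below are invented; they only stand in for the weights that a model such as the Bayesian Network Weight Model or Naïve Bayes Model would supply.

```python
# Illustrative sketch (attributes and weights invented, not the EOS
# framework's actual metadata): ranking candidate learning objects by a
# weighted sum of normalized attribute scores, where the weights stand in
# for the per-learner weights produced by a learned weight model.
learner_weights = {"interactivity": 0.5, "difficulty_match": 0.3, "media_richness": 0.2}

learning_objects = [
    {"id": "LO-1", "interactivity": 0.9, "difficulty_match": 0.4, "media_richness": 0.7},
    {"id": "LO-2", "interactivity": 0.3, "difficulty_match": 0.9, "media_richness": 0.5},
    {"id": "LO-3", "interactivity": 0.6, "difficulty_match": 0.6, "media_richness": 0.9},
]

def score(lo, weights):
    """Weighted suitability of one learning object for one learner."""
    return sum(weights[attr] * lo[attr] for attr in weights)

shortlist = sorted(learning_objects, key=lambda lo: score(lo, learner_weights), reverse=True)
print([(lo["id"], round(score(lo, learner_weights), 2)) for lo in shortlist])
```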
197

Automated detection of breast cancer using SAXS data and wavelet features

Erickson, Carissa Michelle 02 August 2005
The overarching goal of this project was to improve breast cancer screening protocols, first by collecting small angle x-ray scattering (SAXS) images from breast biopsy tissue, and second by applying pattern recognition techniques as a semi-automatic screen. Wavelet-based features were generated from the SAXS image data. The features were supplied to a classifier, which sorted the images into distinct groups, such as normal and tumor.

The main problem in the project was to find a set of features that provided sufficient separation for classification into normal and tumor groups. In the original SAXS patterns, information useful for classification was obscured. The wavelet maps allowed new scale-based information to be uncovered from each SAXS pattern, and this information was subsequently used to define features that allowed for classification. Several calculations were tested to extract useful features from the wavelet decomposition maps. The wavelet map average intensity feature was selected as the most promising, and it was improved by using pre-processing to remove the high central intensities from the SAXS patterns and by using different wavelet bases for the wavelet decomposition.

The investigation undertaken for this project showed very promising results. A classification rate of 100% was achieved for distinguishing between normal samples and tumor samples. The system also showed promising results when tested on unrelated MRI data. In the future, the semi-automatic pattern recognition tool developed for this project could be automated. With a larger set of data for training and testing, the tool could be improved and used to assist radiologists in the detection and classification of breast lesions.
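A wavelet-map average-intensity feature of the kind described above can be computed in a few lines. The sketch below is not the project's code: it uses a hand-written one-level 2-D Haar decomposition on a random stand-in image purely because Haar fits in a short example, whereas the thesis experimented with several wavelet bases on pre-processed SAXS patterns.

```python
# Minimal numpy-only sketch (not the project's code): one level of a 2-D
# Haar wavelet decomposition of a toy "scattering pattern", followed by the
# wavelet-map average-intensity style of feature described above.
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((64, 64))            # stand-in for a pre-processed SAXS image

def haar2d_level1(x):
    """Return the approximation map and (LH, HL, HH) detail maps."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # averages of adjacent row pairs
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # differences of adjacent row pairs
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, (LH, HL, HH)

LL, details = haar2d_level1(img)
features = [float(np.mean(np.abs(m))) for m in details]  # average intensity per map
print([round(f, 4) for f in features])
```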
200

Integration in Computer Experiments and Bayesian Analysis

Karuri, Stella January 2005 (has links)
Mathematical models are commonly used in science and industry to simulate complex physical processes. These models are implemented by computer codes which are often complex. For this reason, the codes are also expensive in terms of computation time, and this limits the number of simulations in an experiment. The codes are also deterministic, which means that output from a code has no measurement error.

One modelling approach for dealing with deterministic output from computer experiments is to assume that the output is composed of a drift component and systematic errors, which are stationary Gaussian stochastic processes. A Bayesian approach is desirable as it takes into account all sources of model uncertainty. Apart from prior specification, one of the main challenges in a complete Bayesian model is integration. We take a Bayesian approach with a Jeffreys prior on the model parameters. To integrate over the posterior, we use two approximation techniques on the log-scaled posterior of the correlation parameters. First, we approximate the Jeffreys prior on the untransformed parameters, which enables us to specify a uniform prior on the transformed parameters and makes Markov Chain Monte Carlo (MCMC) simulations run faster. For the second approach, we approximate the posterior with a Normal density.

A large part of the thesis is focused on the problem of integration. Integration is often a goal in computer experiments and, as previously mentioned, it is necessary for inference in Bayesian analysis. Sampling strategies are more challenging in computer experiments, particularly when dealing with computationally expensive functions. We focus on the problem of integration by using a sampling approach which we refer to as "GaSP integration". This approach assumes that the integrand over some domain is a Gaussian random variable. It follows that the integral itself is a Gaussian random variable and that the Best Linear Unbiased Predictor (BLUP) can be used as an estimator of the integral. We show that the integration estimates from GaSP integration have lower absolute errors. We also develop the Adaptive Sub-region Sampling Integration Algorithm (ASSIA) to improve GaSP integration estimates. The algorithm recursively partitions the integration domain into sub-regions in which GaSP integration can be applied more effectively. As a result of the adaptive partitioning of the integration domain, the adaptive algorithm varies sampling to suit the variation of the integrand. This "strategic sampling" can be used to explore the structure of functions in computer experiments.
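The idea behind GaSP-style integration, that a Gaussian-process model of the integrand yields a Gaussian integral whose posterior mean is a weighted sum of the observed function values, can be sketched in one dimension. The kernel, test function, design, and grid below are my own choices for illustration, not the thesis's setup.

```python
# Sketch of the Bayesian-quadrature idea behind "GaSP integration" (toy
# kernel, integrand, and design, not the thesis's): model the integrand as
# a zero-mean Gaussian process, so the integral's posterior mean is
# z^T K^{-1} y, a weighted sum of the observed function values.
import numpy as np

def k(a, b, ell=0.2):
    """Squared-exponential covariance between 1-D point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

f = lambda x: np.sin(3 * x) + x ** 2          # toy integrand on [0, 1]
x_obs = np.linspace(0.0, 1.0, 8)              # small design, as in computer experiments
y_obs = f(x_obs)

grid = np.linspace(0.0, 1.0, 2001)            # fine grid for the kernel integrals
K = k(x_obs, x_obs) + 1e-10 * np.eye(len(x_obs))    # jitter for numerical stability
z = np.trapz(k(grid, x_obs), grid, axis=0)    # z_i = integral of k(x, x_i) over [0, 1]

estimate = z @ np.linalg.solve(K, y_obs)      # posterior mean of the integral
truth = np.trapz(f(grid), grid)               # reference value for comparison
print(round(estimate, 5), round(truth, 5))
```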
