11 |
Improving shared weight neural networks generalization using regularization theory and entropy maximization
Khabou, Mohamed Ali, January 1999
Thesis (Ph. D.)--University of Missouri-Columbia, 1999. / Typescript. Vita. Includes bibliographical references (leaves 114-121). Also available on the Internet.
|
12 |
Evaluation And Modeling Of Streamflow Data: Entropy Method, Autoregressive Models With Asymmetric Innovations And Artificial Neural Networks
Sarlak, Nermin, 01 June 2005
In the first part of this study, two entropy methods under different distribution assumptions are examined on a network of stream gauging stations located in the Kizilirmak Basin, and the stations are ranked according to their level of importance. Carrying out the ranking under both methods and several distributions shows the effect of the distribution type on each entropy method.
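As a rough sketch of the entropy-based ranking idea, the snippet below estimates the marginal (Shannon) entropy of each station's flow record from a histogram and ranks stations by information content. The station names and flow data are hypothetical, and a histogram estimator is only one of several choices; the thesis's methods additionally depend on the assumed distribution.

```python
import numpy as np

def marginal_entropy(x, bins=20):
    """Shannon entropy (in nats) of a streamflow series,
    estimated from a histogram of the observations."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

# Hypothetical monthly flows for three gauging stations (m^3/s).
rng = np.random.default_rng(0)
stations = {
    "S1": rng.lognormal(3.0, 0.8, 240),
    "S2": rng.lognormal(3.0, 0.3, 240),
    "S3": rng.gamma(2.0, 15.0, 240),
}

# Rank stations by entropy: higher entropy = more information content.
ranking = sorted(stations, key=lambda s: marginal_entropy(stations[s]),
                 reverse=True)
print(ranking)
```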
In the second part of this study, autoregressive models with asymmetric innovations and an artificial neural network model are introduced. Autoregressive (AR) models, as developed in hydrology, rest on several assumptions; this study investigates the normality assumption for the innovations of AR models. The main reason this assumption is made is the difficulty of estimating model parameters under distributions other than the normal. The study therefore aims to introduce into hydrology the modified maximum likelihood procedure developed by Tiku et al. (1996) for estimating the parameters of autoregressive models whose residual series are non-normally distributed. How the parameters of autoregressive models with skewed innovations can be estimated is thus also addressed.
Besides these autoregressive models, an artificial neural network (ANN) model was constructed for the annual and monthly hydrologic time series, since it requires neither a distributional assumption nor a linearity assumption.
The models considered are applied to annual and monthly streamflow data from five gauging stations in the Kizilirmak Basin. Compared with the results of the artificial neural network models, the AR(1) model with Weibull innovations provides the best solutions for the annual series, and the AR(1) model with generalized logistic innovations provides the best solutions for the monthly series.
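A minimal sketch of the model comparison: fit an AR(1) model to a series, then score candidate innovation distributions on the residuals by log-likelihood. The data are hypothetical, the distribution names follow scipy.stats, and the generic MLE fit here merely stands in for the modified maximum likelihood procedure of Tiku et al. (1996).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
q = rng.lognormal(3.0, 0.5, 500)       # hypothetical annual flows

# AR(1) by least squares: q_t = c + phi * q_{t-1} + e_t
x, y = q[:-1], q[1:]
phi, c = np.polyfit(x, y, 1)
resid = y - (c + phi * x)

# Compare innovation distributions by log-likelihood of the residuals.
for name, dist in [("normal", stats.norm),
                   ("Weibull", stats.weibull_min),
                   ("generalized logistic", stats.genlogistic)]:
    params = dist.fit(resid)           # rough MLE fit, location included
    ll = dist.logpdf(resid, *params).sum()
    print(f"{name:22s} log-likelihood = {ll:.1f}")
```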
|
13 |
Stochastic Modelling and Intervention of the Spread of HIV/AIDS
Asrul Sani, Unknown Date
Since the first cases of HIV/AIDS were recognised in the early 1980s, a large number of mathematical models of the disease have been proposed. However, the mobility of people among regions, which has an obvious impact on the spread of the disease, has received little attention in these modelling studies, mainly because models of the spread of the disease across multiple populations are very complex and, as a consequence, can easily become intractable. In this thesis we provide various new results pertaining to the spread of the disease in mobile populations, including epidemic intervention in multiple populations.

We first develop stochastic models for the spread of the disease in a single heterosexual population, considering both constant and varying population sizes. In particular, we consider a class of continuous-time Markov chains (CTMCs). We establish deterministic and Gaussian diffusion analogues of these stochastic processes by applying the theory of density dependent processes, and we provide a range of numerical experiments to show how well the deterministic and Gaussian counterparts approximate the dynamic behaviour of the processes. We derive threshold parameters, known as basic reproduction numbers, for both cases: above the threshold the disease is uniformly persistent, while below it the disease-free equilibrium is locally attractive. We find that the threshold conditions for constant and varying population sizes have the same form.

To take the mobility of people among regions into account, we extend the stochastic models to multiple populations. Various stochastic models for multiple populations are formulated as CTMCs, and the deterministic and Gaussian diffusion counterparts of the corresponding processes are likewise established. Threshold parameters for the persistence of the disease in the multiple-population models are derived by applying the concept of next generation matrices. The results of this study can serve as a basic framework for formulating and analysing more realistic stochastic models of the spread of HIV in mobile heterogeneous populations, classifying all individuals by age, risk, and level of infectivity while considering different modes of disease transmission.

Assuming an accurate mathematical model for the spread of HIV/AIDS, another question addressed in this thesis is how to control the spread of the disease in a mobile population. Most previous studies focus on identifying the most significant parameters in a model; in contrast, we study these problems as optimal epidemic intervention problems. The study is largely motivated by the fact that more and more local governments allocate budgets over a certain period of time to combat the disease in their areas. The question is then how to allocate this limited budget to minimise the number of new HIV cases, say at the country level, over a finite time horizon as people move among regions. The mathematical models developed in the first part of the thesis serve as the dynamic constraints of the optimal control problems.

We also introduce a novel approach to solving quite general optimal control problems using the Cross-Entropy (CE) method. The effectiveness of the CE method is demonstrated through several illustrative examples in optimal control, the main application being the optimal epidemic intervention problems discussed above.
These are highly non-linear, multidimensional problems, and many existing numerical techniques for solving such optimal control problems suffer from the curse of dimensionality. We find, however, that the CE technique is very efficient at solving them. The numerical results of the optimal epidemic strategies obtained via the CE method suggest that the optimal trajectories are highly synchronised among patches but depend little on the structure of the models; rather, it is the model parameters (such as the time horizon, the available budget, and the infection rates) that largely determine the form of the solution.
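To make the CE idea concrete, the sketch below applies it to a toy two-patch budget-allocation problem. Everything problem-specific here is hypothetical (the objective function, the patch parameters, the budget); only the sample/score/elite/update loop is the Cross-Entropy method itself.

```python
import numpy as np

def new_infections(alloc, budget=1.0):
    """Toy objective: expected new cases in two patches given a budget
    split; spending reduces each patch's infection pressure."""
    a = np.clip(alloc, 0.0, budget)
    b = budget - a
    return 40 * np.exp(-3 * a) + 25 * np.exp(-2 * b)

rng = np.random.default_rng(2)
mu, sigma = 0.5, 0.3            # CE sampling distribution over the split
n, elite = 200, 20
for _ in range(30):
    samples = rng.normal(mu, sigma, n)
    scores = new_infections(samples)
    best = samples[np.argsort(scores)[:elite]]   # elite = lowest cost
    mu, sigma = best.mean(), best.std() + 1e-6   # CE parameter update
print(f"optimal budget share for patch 1: {mu:.3f}")
```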
|
15 |
Exploiting non-redundant local patterns and probabilistic models for analyzing structured and semi-structured data
Wang, Chao, January 2008
Thesis (Ph. D.)--Ohio State University, 2008. / Title from first page of PDF file. Includes bibliographical references (p. 140-150).
|
16 |
Investigations into the use of quantified Bayesian maximum entropy methods to generate improved distribution maps and biomass estimates from fisheries acoustic survey data
Heywood, Ben, January 2008
Thesis (M.Phil.) - University of St Andrews, April 2008.
|
17 |
Simulation ranking and selection procedures and applications in network reliability design
Kiekhaefer, Andrew Paul, 01 May 2011
This thesis presents three novel contributions to both the application and the development of ranking and selection procedures. Ranking and selection is an important topic in the discrete event simulation literature, concerned with the use of statistical approaches to select the best system, or set of best systems, from a set of simulated alternatives. It comprises three different approaches: subset selection, indifference zone selection, and multiple comparisons. The methodology addressed in this thesis focuses primarily on the first two: subset selection and indifference zone selection.
Our first contribution regards the application of existing ranking and selection procedures to an important body of literature known as system reliability design. If we can model a system as a network of arcs and nodes, then determining the most reliable network configuration, given a set of design constraints, is a difficult optimization problem that we refer to as the network reliability design problem. In this thesis, we first present a novel solution approach for one type of network reliability design problem in which total enumeration of the solution space is feasible and desirable. The approach focuses on improving the efficiency of evaluating system reliabilities and on quantifying the probability of correctly selecting the true best design, based on estimates of the expected system reliabilities obtained through ranking and selection procedures, both of which are novel ideas in the system reliability design literature. Altogether, the method eliminates the guesswork previously associated with this design problem while maintaining significant runtime improvements over the existing methodology.
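For intuition, here is a minimal sketch of evaluating a single design's two-terminal reliability by Monte Carlo sampling of arc failures. The bridge network and its arc reliabilities are hypothetical, and the thesis's actual evaluation procedure is more efficient than this naive version.

```python
import random

# Arcs of a small bridge network with individual arc reliabilities.
arcs = {("s", "a"): 0.9, ("s", "b"): 0.8, ("a", "b"): 0.85,
        ("a", "t"): 0.9, ("b", "t"): 0.8}

def connected(up_arcs, src="s", dst="t"):
    """Graph search over the surviving (undirected) arcs."""
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        for i, j in up_arcs:
            if u in (i, j):
                v = j if u == i else i
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
    return dst in seen

def reliability(n=100_000, seed=0):
    """Fraction of sampled failure states in which s and t stay connected."""
    rng = random.Random(seed)
    hits = sum(
        connected([a for a, p in arcs.items() if rng.random() < p])
        for _ in range(n)
    )
    return hits / n

print(f"estimated s-t reliability: {reliability():.4f}")
```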
Our second contribution is a new optimization framework for the network reliability design problem that is applicable to any topological and terminal configuration and to solution sets of any size. The framework focuses on improving the efficiency of evaluating and comparing system reliabilities, while providing more robust performance and a more user-friendly procedure in terms of input parameter selection. This is accomplished through two novel statistical sampling procedures based on ranking and selection concepts: Sequential Selection of the Best Subset and Duplicate Generation. Altogether, the framework achieves the same convergence and solution quality as the baseline cross-entropy approach, with runtime and sample size improvements on the order of 450% to 1500% over the example networks tested.
Our final contribution develops and extends the general ranking and selection literature with novel procedures for selecting a specified number of best systems when system means and variances are unknown and potentially unequal. We present three new ranking and selection procedures: a subset selection procedure, an indifference zone selection procedure, and a combined two-stage subset selection and indifference zone selection procedure. All procedures are backed by proofs of their theoretical guarantees as well as empirical results on the probability of correct selection. We also investigate the effect of various parameters on each procedure's overall performance.
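As a rough illustration of the subset-selection flavour of ranking and selection with unknown, unequal variances, the sketch below uses a Gupta-style screening rule with Welch-type standard errors and a Bonferroni split of the error rate. It is not the thesis's procedure, whose exact constants and guarantees are developed there.

```python
import numpy as np
from scipy import stats

def select_subset(samples, alpha=0.05):
    """Retain system i unless some other system's sample mean beats it
    by more than a Welch-type margin (larger mean = better)."""
    k = len(samples)
    means = [np.mean(s) for s in samples]
    se2 = [np.var(s, ddof=1) / len(s) for s in samples]
    keep = []
    for i in range(k):
        retained = True
        for j in range(k):
            if j == i:
                continue
            df = min(len(samples[i]), len(samples[j])) - 1  # conservative df
            h = stats.t.ppf(1 - alpha / (k - 1), df)        # Bonferroni split
            if means[i] < means[j] - h * np.sqrt(se2[i] + se2[j]):
                retained = False
                break
        if retained:
            keep.append(i)
    return keep

rng = np.random.default_rng(3)
systems = [rng.normal(mu, sd, 30)
           for mu, sd in [(10.0, 2.0), (10.2, 3.0), (8.0, 1.0), (9.9, 2.0)]]
print(select_subset(systems))   # typically keeps the near-best systems
```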
|
18 |
Bayesian-Entropy Method for Probabilistic Diagnostics and Prognostics of Engineering Systems
January 2020
Information exists in various forms, and better utilization of the available information can benefit system awareness and response prediction. The focus of this dissertation is the fusion of different types of information using the Bayesian-Entropy method. The Maximum Entropy method in information theory offers a unique way of handling information in the form of constraints. The Bayesian-Entropy (BE) principle is proposed to integrate Bayes' theorem with the Maximum Entropy method in order to encode extra information. The posterior distribution in the Bayesian-Entropy method has a Bayesian part that handles point observation data, and an entropy part that encodes constraints such as statistical moment information, range information, and general functional relationships between variables. The method is then extended to a network format, the Bayesian-Entropy Network (BEN), which serves as a generalized information fusion tool for diagnostics, prognostics, and surrogate modeling.
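A grid-based sketch of the BE posterior for a scalar parameter may help: the entropy part exponentially tilts the prior so that a moment constraint E[theta] = m holds (the exponential tilt is the standard maximum-entropy solution for a mean constraint), and the Bayesian part multiplies in the likelihood of point observations. All numbers below are toy values.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

theta = np.linspace(-5, 5, 2001)             # parameter grid
prior = norm.pdf(theta, 0, 2)
prior /= prior.sum()                          # discrete grid weights

# Entropy part: tilt exp(lam * theta) enforcing E[theta] = m.
m = 0.8
def moment_gap(lam):
    w = prior * np.exp(lam * theta)
    w /= w.sum()
    return (theta * w).sum() - m
lam = brentq(moment_gap, -5.0, 5.0)           # solve for the multiplier

# Bayesian part: likelihood of point observations y_i ~ N(theta, 1).
y = [1.2, 0.4, 0.9]
like = np.exp(sum(norm.logpdf(yi, theta, 1.0) for yi in y))

post = prior * np.exp(lam * theta) * like     # BE posterior, normalized
post /= post.sum()
print(f"lambda = {lam:.3f}, posterior mean = {(theta * post).sum():.3f}")
```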
The proposed BEN is demonstrated and validated with extensive engineering applications. It is first demonstrated for damage diagnostics of gas pipelines and metal/composite plates, where both empirical knowledge and physics models are integrated with direct observations to improve diagnostic accuracy and reduce the number of training samples. Next, the BEN is demonstrated for prognostics and safety assessment in the air traffic management system: various information types, such as human concepts, variable correlation functions, physical constraints, and tendency data, are fused in the BEN to enhance safety assessment and risk prediction in the National Airspace System (NAS). Finally, the BE principle is applied to surrogate modeling. Multiple algorithms based on different types of information encoding, such as Bayesian-Entropy Linear Regression (BELR), Bayesian-Entropy Semiparametric Gaussian Process (BESGP), and Bayesian-Entropy Gaussian Process (BEGP), are demonstrated on numerical toy problems and practical engineering analyses. The results show that the major benefits are superior prediction/extrapolation performance and a significant reduction in training samples, achieved by using additional physics/knowledge as constraints. The proposed BEN offers a systematic and rigorous way to incorporate various information sources, and several major conclusions are drawn from the study. / Dissertation/Thesis / Doctoral Dissertation Mechanical Engineering 2020
|
19 |
Measurement of picosecond time-resolved, swift heavy ion induced luminescence
Durantel, Florent, 13 December 2018
We developed an instrument for measuring the luminescence induced by a heavy ion beam (number of nucleons ≥ 12) with energies of the order of MeV/nucleon. Based on single-photon counting by coincidence, the device provides, in the same run, a 16-channel energy spectrum in the near-UV, visible, and near-IR region (185-920 nm), together with the time-resolved response of each channel over the ns to µs range, sampled at 100 ps. Measurements can be performed from room temperature down to 30 K.
This work places particular emphasis on data extraction methods. Once the need to deconvolve the signals is established, different instrument response profiles, both simulated and reconstructed from measurements, are evaluated; this requires a systematic temporal characterization of each component of the device. These instrumental profiles are then used in two deconvolution methods: least squares first, followed by maximum entropy.
Two representative materials are tested: strontium titanate, for studying the dynamics of electronic excitation, and a commercial plastic scintillator, the BC400, for studying aging and the decrease of performance with fluence. In both cases we were able to reveal an ultrafast component with a subnanosecond time constant.
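On synthetic data, the least-squares route can be sketched as follows: the measured trace is the true decay convolved with the instrument response function (IRF), and a non-negative least-squares solve recovers the decay. All waveforms and constants below are hypothetical; the maximum-entropy variant would replace the plain least-squares objective with an entropy-regularised one.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import nnls

dt = 0.1                                     # ns per sample
t = np.arange(0, 30, dt)

irf = np.exp(-0.5 * ((t - 2.0) / 0.3) ** 2)  # Gaussian IRF, ~0.3 ns wide
irf /= irf.sum()
# Hypothetical decay: fast subnanosecond + slow component.
true = 0.7 * np.exp(-t / 0.4) + 0.3 * np.exp(-t / 8.0)

# Causal convolution matrix and a noisy measurement.
A = toeplitz(irf, np.zeros_like(irf))
rng = np.random.default_rng(4)
measured = A @ true + rng.normal(0, 1e-3, t.size)

recovered, _ = nnls(A, measured)             # non-negative least squares
print("recovered decay, first samples:", recovered[:5].round(3))
```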
|
20 |
Sentence Compression by Removing Recursive Structures from Syntax Trees
MATSUBARA, Shigeki, KATO, Yoshihide, EGAWA, Seiji, 18 July 2008
No description available.
|