  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

A Bayesian inversion framework for subsurface seismic imaging problems

Urozayev, Dias 11 1900 (has links)
This thesis considers the reconstruction of subsurface models from seismic observations, a well-known high-dimensional and ill-posed problem. As a first regularization, the parameter space is reduced through a truncated Discrete Cosine Transform (DCT), which helps regularize the seismic inverse problem and alleviates its computational complexity. A second regularization based on Laplace priors, accounting for sparsity in the model, is further proposed to enhance the reconstruction quality. More specifically, two Laplace-based penalizations are applied: one on the DCT coefficients and another on the spatial variations of the subsurface model, which leads to an enhanced representation of the cross-correlations of the DCT coefficients. The Laplace priors are represented in hierarchical forms that are suitable for deriving efficient inversion schemes. The corresponding inverse problem, formulated within a Bayesian framework, lies in computing the joint posterior of the target model parameters and the hyperparameters of the introduced priors. This joint posterior is approximated using the Variational Bayesian (VB) approach, with a separable form of the marginals obtained by minimizing the Kullback-Leibler divergence. In contrast with classical deterministic optimization methods, the VB approach provides an efficient means of obtaining not only point estimates but also closed forms of the posterior probability distributions of the quantities of interest. The case in which the observations are contaminated with outliers is further considered; for that case, a robust inversion scheme is proposed based on a Student-t prior for the observation noise. The proposed approaches are applied to successfully reconstruct the subsurface acoustic impedance model of the Volve oilfield.
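As a small illustration of the first regularization step, a truncated DCT compresses a smooth model into a handful of coefficients. The sketch below (Python with NumPy/SciPy; the 1-D profile and truncation level are illustrative assumptions, not the thesis's setup) shows the reduced parameter space at work:

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)

# Hypothetical 1-D impedance profile: smooth trend plus small noise.
depth = np.linspace(0.0, 1.0, 200)
model = 2.0 + np.sin(2 * np.pi * depth) + 0.05 * rng.standard_normal(200)

# Forward DCT, keeping only the first k coefficients (the reduced parameter space).
coeffs = dct(model, norm="ortho")
k = 20
truncated = np.zeros_like(coeffs)
truncated[:k] = coeffs[:k]

# Inverse DCT reconstructs the model from the reduced representation.
approx = idct(truncated, norm="ortho")
rel_error = np.linalg.norm(model - approx) / np.linalg.norm(model)
print(f"relative reconstruction error with {k}/200 coefficients: {rel_error:.3f}")
```

In an inversion, one would then estimate only the k retained coefficients instead of all 200 grid values, which is where the computational savings come from.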
52

Direct Variational Method of Analysis for Elliptic Paraboloidal Shells of Translation

Sun, Chung-Li 06 1900 (has links)
The Rayleigh-Ritz method of trial functions has been adopted to solve problems of translational shells under uniform external pressure. The basic energy expressions have been written in direct tensor notation, and the stress-strain-displacement relations are given in the linear sense. Three different kinds of boundary conditions --- all four edges fixed, one pair of edges fixed and the other pair simply supported, and all four edges simply supported --- have been analysed. The stress and moment resultants at different points of the shell have been computed on an IBM 7040 and are plotted as curves and figures to show their variations. The convergence of the displacement function uz is roughly verified, and comparisons with other established results have been made. The results obtained by the present approach are satisfactory, at least from an engineering point of view. / Thesis / Master of Engineering (MEngr)
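The Rayleigh-Ritz idea used here can be illustrated on a much simpler textbook problem: a simply supported Euler-Bernoulli beam under uniform load, with sine trial functions. All quantities below use illustrative unit values; the closed-form Ritz coefficients come from minimizing the potential energy Pi = (EI/2) ∫ (w'')^2 dx − q ∫ w dx:

```python
import numpy as np

# Simply supported Euler-Bernoulli beam, uniform load q, Ritz trial functions
# w(x) = sum_n a_n sin(n pi x / L). Minimizing the potential energy gives
# a_n = 4 q L^4 / (EI pi^5 n^5) for odd n (even modes drop out by symmetry).
E_I, q, L = 1.0, 1.0, 1.0  # illustrative unit values

def midspan_deflection(terms):
    n = np.arange(1, 2 * terms, 2)                 # odd mode numbers 1, 3, 5, ...
    a = 4 * q * L**4 / (E_I * np.pi**5 * n**5)
    return float(np.sum(a * np.sin(n * np.pi / 2)))

exact = 5 * q * L**4 / (384 * E_I)   # classical closed-form midspan deflection
# The one-term Ritz answer is already within about 0.4% of the exact value.
print(f"1-term Ritz: {midspan_deflection(1):.6f}, exact: {exact:.6f}")
```

The shell problems in the thesis follow the same pattern, only with tensor-form energy expressions and two-dimensional trial functions satisfying the edge conditions.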
53

Genomic Data Augmentation with Variational Autoencoder

Thyrum, Emily 12 1900 (has links)
In order to treat cancer effectively, medical practitioners must predict pathological stages accurately, and machine learning methods can be employed to make such predictions. However, biomedical datasets, including genomic datasets, often have disproportionately more samples from people of European ancestry than from other ethnic or racial groups, which can cause machine learning methods to perform better on the European samples than on people from the under-represented groups. Data augmentation can be employed as a potential solution, artificially increasing the number of samples from under-represented racial groups and in turn improving pathological stage predictions for future patients from such groups. Genomic data augmentation has been explored previously, for example using a Generative Adversarial Network, but to the best of our knowledge, the use of the variational autoencoder for genomic data augmentation remains largely unexplored. Here we utilize a geometry-based variational autoencoder that models the latent space as a Riemannian manifold, so that samples can be generated without the use of a prior distribution, and show that the variational autoencoder can indeed be used to reliably augment genomic data. Using TCGA prostate cancer genotype data, we show that our VAE-generated data can improve pathological stage predictions on a test set of European samples. Because only the European samples were labeled with pathological stage, we could not validate the generated African samples in this way, but we nevertheless attempt to show that such samples are realistic. / Computer and Information Science
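For orientation, the generative step of a standard VAE (not the geometry-based, Riemannian-latent variant used in the thesis) can be sketched in a few lines: the reparameterization trick draws latent samples, and a decoder maps them to data space. The toy linear "decoder" and the SNP-dosage bounds below are purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def reparameterize(mu, log_var, rng):
    """Draw z = mu + sigma * eps with eps ~ N(0, I) (the reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Toy "encoder" output for a batch of 5 minority-class samples (2-D latent space).
mu = np.zeros((5, 2))
log_var = np.zeros((5, 2))  # unit variance

# Toy linear "decoder": latent code -> 4 SNP-like features.
W = rng.standard_normal((2, 4))

z = reparameterize(mu, log_var, rng)
augmented = np.clip(1.0 + z @ W, 0.0, 2.0)  # genotype dosages are bounded in [0, 2]
print("augmented batch shape:", augmented.shape)
```

A trained VAE would replace the fixed maps with learned networks; the augmented rows would then be appended to the minority-group training data before fitting the stage classifier.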
54

Variational Modeling of Ionic Polymer-Based Structures

Buechler, Miles A. 10 August 2005 (has links)
Ionomeric polymers are a promising class of intelligent materials that exhibit electromechanical coupling similar to that of piezoelectric bimorphs. They are much more compliant than piezoelectric ceramics or polymers and have been shown to produce actuation strains on the order of 2% at operating voltages between 1 V and 3 V [Akle2]. Their high compliance is advantageous in low-force sensing configurations because ionic polymers have very little impact on the dynamics of the measured system. Here we present a variational approach to the dynamic modeling of structures that incorporate ionic polymer materials. The modeling approach requires a priori knowledge of three empirically determined material properties: elastic modulus, dielectric permittivity, and effective strain coefficient. Previous work by Newbury and Leo has demonstrated that these three parameters are strongly frequency dependent, from below 1 Hz to above 1 kHz. Combining the frequency-dependent material parameters with the variational method produces a second-order matrix representation of the structure. The frequency dependence of the material parameters is incorporated using a complex-property approach similar to techniques for modeling viscoelastic materials. Three structural models are developed to demonstrate this method. First, a cantilever beam model is developed and the material properties of a typical polymer are experimentally determined. These properties are then used to simulate both the actuation and sensing responses of the transducer; the simulations compare very well with the experimental results, validating the variational method for modeling ionic polymer structures. Next, a plate model is developed in cylindrical coordinates and simulations are performed for a variety of boundary conditions. Finally, a plate model is developed in Cartesian coordinates. Methods for applying non-homogeneous boundary conditions are then developed and applied to the Cartesian-coordinate model, and simulations are compared with experimental data. Again the simulations closely match the experiments, validating the modeling method for plate models in two dimensions. / Master of Science
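The complex-property approach can be sketched for a single-degree-of-freedom system: a frequency-dependent complex stiffness enters a second-order model, and the frequency response follows directly. The mass, stiffness, loss factor, and frequency dependence below are illustrative stand-ins, not the thesis's measured properties:

```python
import numpy as np

# Illustrative parameters: modal mass and a complex, frequency-dependent stiffness.
m = 1.0                   # kg
k0, eta = 100.0, 0.05     # static stiffness (N/m) and loss factor

freqs = np.linspace(0.1, 5.0, 500)   # Hz
omega = 2 * np.pi * freqs

# Complex stiffness with a mild frequency dependence (a stand-in for data that,
# in the thesis, comes from empirically determined material properties).
k = k0 * (1.0 + 0.1 * np.log10(freqs / freqs[0])) * (1.0 + 1j * eta)

# Receptance FRF of the second-order system: H(w) = 1 / (k(w) - m w^2).
H = 1.0 / (k - m * omega**2)
peak_freq = freqs[np.argmax(np.abs(H))]
print(f"resonance near {peak_freq:.2f} Hz")
```

In the structural models of the thesis the scalars m and k become the mass and (complex) stiffness matrices of the second-order matrix representation, but the frequency-by-frequency solution proceeds the same way.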
55

Nonmonotone Multivalued Mappings

Wang, Rong-yi 02 June 2006 (has links)
Let T be a multivalued mapping from a nonempty subset of a topological vector space into its topological dual. We discuss the relationship between multivalued mappings satisfying the (S)_+ condition and those satisfying the (S)_+^1 condition. To unify the (S)_+ condition for single-valued and multivalued mappings, we extend the weak (S)_+ condition for single-valued mappings, defined in [9], to multivalued mappings. These conditions extend naturally to mappings into L(X,Z), where Z is an ordered Hausdorff topological vector space. We also derive some existence results for generalized vector variational inequalities and generalized variational inequalities associated with mappings that satisfy the (S)_+, (S)_+^1, or weak (S)_+ condition.
56

On the solution stability of quasivariational inequality

Lee, Zhi-an 28 January 2008 (has links)
We study the solution stability of a parametric quasi-variational inequality without a monotonicity assumption on the operators. Using degree theory and the natural map, we show that under certain conditions the solution map of the problem is lower semicontinuous with respect to the parameters.
57

Copula Modelling of High-Dimensional Longitudinal Binary Response Data / Copula-modellering av högdimensionell longitudinell binärresponsdata

Henningsson, Nils January 2022 (has links)
This thesis treats the modelling of a high-dimensional data set of longitudinal binary responses. The data consist of default indicators from different nations around the world, together with explanatory variables such as exposure to underlying assets. The quantity modelled is an aggregated term that combines several of the default indicators in the data set into one. The modelling takes a portfolio perspective, seeking both to find underlying correlations between the nations in the data set and to examine the extreme values produced by a portfolio with assets in those nations. The approach uses Gaussian copulas: several models are first formulated mathematically, and their parameters are then optimized to best fit the data set. Models A and B are optimized by standard stochastic gradient ascent on the likelihood function, while model C uses variational inference and stochastic gradient ascent on the evidence lower bound. Using the Gaussian copulas obtained from the optimization, a portfolio simulation is then performed to examine the extreme values. The results show low correlations in models A and B, while model C, with its additional regional correlations, shows slightly higher correlations in three of the subgroups. The portfolio simulations show similar tail behaviour in all three models; however, model C produces more extreme risk-measure outcomes in the form of higher VaR and ES. / Denna uppsats behandlar modellering av en datauppsättning bestående av högdimensionell longitudinell binärrespons. Datan består av konkursindikatorer för ett flertal suveräna stater runtom världen samt förklarande variabler så som exponering mot underliggande tillgångar. Datan som används i modelleringen är en aggregerad term som slår samman flera av konkursindikatorerna till en term.
Modellerandet tar ett portföljperspektiv och försöker att finna underliggande korrelationer mellan nationerna i datamängden så väl som extremförluster som kan komma från en portfölj med tillgångar i de olika länderna som innefattas av datamängden. Utgångspunkten för modellerandet är ett copula-perspektiv som använder Gaussiska copulas där man först försöker matematiskt formulera flertalet modeller för att sedan optimera parametrarna i dessa modeller för att bäst passa datamängden till hands. För modell A och modell B optimeras log-likelihoodfunktionen med hjälp av stochastic gradient ascent medan i modell C används variational inference och sedan optimeras evidence lower bound med hjälp av stochastic gradient ascent. Med hjälp av de anpassade copula-modellerna simuleras sedan olika portföljer för att se vilka extremvärden som kan antas. Resultaten visar små korrelationer i modell A och B medan i modell C, med dess ytterligare regionala korrelationer, visas något större korrelation i tre av undergrupperna. Portföljsimuleringarna visar liknande svansbeteende i alla tre modeller, men modell C ger upphov till större riskmåttvärden i portföljerna i form av högre VaR och ES.
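A minimal Gaussian-copula portfolio simulation in the spirit of the models above; the correlation matrix, marginal default probability, and exposures are illustrative assumptions rather than the thesis's fitted parameters:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Illustrative 3-asset equicorrelated Gaussian copula.
rho = 0.3
corr = np.full((3, 3), rho) + (1 - rho) * np.eye(3)
L = np.linalg.cholesky(corr)

n = 100_000
z = rng.standard_normal((n, 3)) @ L.T
u = norm.cdf(z)                        # copula samples in [0, 1]^3

# Map uniforms to Bernoulli defaults with a 2% marginal default probability,
# then to portfolio losses with unit exposure per asset.
defaults = (u < 0.02).astype(float)
loss = defaults.sum(axis=1)

var_99 = np.quantile(loss, 0.99)       # Value-at-Risk
es_99 = loss[loss >= var_99].mean()    # Expected Shortfall
print(f"99% VaR = {var_99:.0f}, 99% ES = {es_99:.2f}")
```

Raising rho fattens the joint tail: defaults cluster, and ES grows faster than VaR, which is the effect the regional correlations in model C are designed to capture.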
58

Variable sampling in multiparameter Shewhart charts

Chengalur-Smith, Indushobha Narayanan January 1989 (has links)
This dissertation deals with the use of Shewhart control charts, modified to have variable sampling intervals, to simultaneously monitor a set of parameters. Fixed sampling interval control charts are modified to utilize sampling intervals that vary depending on what is being observed from the data. Two problems are emphasized, namely, the simultaneous monitoring of the mean and the variance and the simultaneous monitoring of several means. For each problem, two basic strategies are investigated. One strategy uses separate control charts for each parameter. A second strategy uses a single statistic which combines the information in the entire sample and is sensitive to shifts in any of the parameters. Several variations on these two basic strategies are studied. Numerical studies investigate the optimal number of sampling intervals and the length of the sampling intervals to be used. Each procedure is compared to corresponding fixed interval procedures in terms of time and the number of samples taken to signal. The effect of correlation on multiple means charts is studied through simulation. For both problems, it is seen that the variable sampling interval approach is substantially more efficient than fixed interval procedures, no matter which strategy is used. / Ph. D.
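The variable-sampling-interval idea can be sketched for a single X-bar chart: a standardized sample mean that falls near the control limits triggers a short interval before the next sample, while one near the centerline allows a long interval. The limits, warning zone, and interval lengths below are illustrative choices, not the dissertation's optimized values:

```python
import numpy as np

rng = np.random.default_rng(7)

def next_interval(z, warning=1.0, short=0.25, long=2.0):
    """Choose the next sampling interval from the standardized sample mean.

    |z| beyond the warning line -> sample again soon; otherwise wait longer.
    """
    return short if abs(z) > warning else long

def run_to_signal(shift, n=5, limit=3.0, max_samples=10_000):
    """Simulate a VSI X-bar chart until it signals; return (samples, elapsed time)."""
    t, samples = 0.0, 0
    interval = 2.0  # start with the long interval
    while samples < max_samples:
        t += interval
        samples += 1
        z = np.sqrt(n) * rng.normal(shift, 1.0, n).mean()  # standardized sample mean
        if abs(z) > limit:
            return samples, t
        interval = next_interval(z)
    return samples, t

samples, t = run_to_signal(shift=1.0)
print(f"signaled after {samples} samples, elapsed time {t:.2f}")
```

Comparing the elapsed time against a fixed-interval chart with the same in-control sampling rate is exactly the kind of efficiency comparison the dissertation carries out, there extended to several means and to joint mean-variance monitoring.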
59

A duality approach to gap functions for variational inequalities and equilibrium problems

Lkhamsuren, Altangerel 03 August 2006 (has links) (PDF)
This work investigates applications of conjugate duality for scalar and vector optimization problems to the construction of gap functions for variational inequalities and equilibrium problems. The basic idea of the approach is to reformulate variational inequalities and equilibrium problems as optimization problems depending on a fixed variable, which allows us to apply duality results from optimization. Based on some perturbations, we first consider conjugate duality for scalar optimization; as applications, duality investigations for the convex partially separable optimization problem are discussed. Afterwards, we concentrate on applications of conjugate duality for convex optimization problems in finite- and infinite-dimensional spaces to the construction of gap functions for variational inequalities and equilibrium problems. To verify the properties in the definition of a gap function, weak and strong duality are used. The remainder of this thesis deals with the extension of this approach to vector variational inequalities and vector equilibrium problems. By using perturbation functions in analogy to the scalar case, different dual problems for vector optimization and duality assertions for these problems are derived. This study allows us to propose some set-valued gap functions for the vector variational inequality. Finally, by applying Fenchel duality on the basis of weak orderings, some variational principles for vector equilibrium problems are investigated.
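For orientation, a classical example of a gap function for the variational inequality VI(F, K) is Auslender's; the thesis constructs gap functions via conjugate duality instead, but the defining properties it must verify are the same:

```latex
% Variational inequality VI(F, K): find x^* \in K such that
%   \langle F(x^*),\, y - x^* \rangle \ge 0 \quad \text{for all } y \in K.
% Auslender's gap function:
\[
  \gamma(x) \;=\; \sup_{y \in K} \langle F(x),\, x - y \rangle ,
\]
% which satisfies the two defining properties of a gap function:
%   (i)  \gamma(x) \ge 0 for all x \in K;
%   (ii) \gamma(x^*) = 0 with x^* \in K if and only if x^* solves VI(F, K).
```

Weak duality delivers property (i) and strong duality delivers property (ii) for the duality-based constructions in the thesis.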
60

Combined decision making with multiple agents

Simpson, Edwin Daniel January 2014 (has links)
In a wide range of applications, decisions must be made by combining information from multiple agents with varying levels of trust and expertise. For example, citizen science involves large numbers of human volunteers with differing skills, while disaster management requires aggregating information from multiple people and devices to make timely decisions. This thesis introduces efficient and scalable Bayesian inference for decision combination, allowing us to fuse the responses of multiple agents in large, real-world problems and account for the agents’ unreliability in a principled manner. As the behaviour of individual agents can change significantly, for example if agents move in a physical space or learn to perform an analysis task, this work proposes a novel combination method that accounts for these time variations in a fully Bayesian manner using a dynamic generalised linear model. This approach can also be used to augment agents’ responses with continuous feature data, thus permitting decision-making when agents’ responses are in limited supply. Working with information inferred using the proposed Bayesian techniques, an information-theoretic approach is developed for choosing optimal pairs of tasks and agents. This approach is demonstrated by an algorithm that maintains a trustworthy pool of workers and enables efficient learning by selecting informative tasks. The novel methods developed here are compared theoretically and empirically to a range of existing decision combination methods, using both simulated and real data. The results show that the methodology proposed in this thesis improves accuracy and computational efficiency over alternative approaches, and allows for insights to be determined into the behavioural groupings of agents.
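The simplest instance of Bayesian decision combination can be sketched with fixed, known confusion matrices: each agent's response is multiplied into the posterior over the true label. The thesis learns these reliabilities (and their time variation) rather than assuming them; the matrices below are illustrative:

```python
import numpy as np

# Two classes; three agents with known (illustrative) confusion matrices,
# where conf[a][true, reported] = P(agent a reports `reported` | true class).
conf = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # reliable agent
    [[0.7, 0.3], [0.4, 0.6]],   # mediocre agent
    [[0.6, 0.4], [0.5, 0.5]],   # near-random agent
])
prior = np.array([0.5, 0.5])

def fuse(responses):
    """Posterior over the true class given one response per agent."""
    post = prior.copy()
    for a, r in enumerate(responses):
        post = post * conf[a][:, r]   # multiply in each agent's likelihood
    return post / post.sum()

# All three agents report class 1; the posterior weighs each by its reliability.
post = fuse([1, 1, 1])
print(f"P(true = 1 | responses) = {post[1]:.3f}")
```

In the full model the confusion matrices carry Dirichlet priors and evolve over time via the dynamic generalised linear model, but the fusion step retains this product-of-likelihoods structure.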
