571

Particle tracking proxies for prediction of CO₂ plume migration within a model selection framework

Bhowmik, Sayantan 24 June 2014
Geologic sequestration of CO₂ in deep saline aquifers has been studied extensively over the past two decades as a viable method of reducing anthropogenic carbon emissions. Monitoring and predicting the movement of injected CO₂ is important for assessing containment of the gas within the storage volume and for taking corrective measures if required. Given the uncertainty in the geologic architecture of storage aquifers, it is reasonable to depict our prior knowledge of the project area using a vast suite of aquifer models. Simulating such a large number of models with traditional numerical flow simulators to evaluate uncertainty is computationally expensive. A novel stochastic workflow for characterizing plume migration, based on a model selection algorithm developed by Mantilla in 2011, has been implemented. The approach includes four main steps: (1) assessing the connectivity/dynamic characteristics of a large prior ensemble of models using proxies; (2) clustering the models using principal component analysis or multidimensional scaling coupled with k-means clustering; (3) selecting models using Bayes' rule on the reduced model space; and (4) expanding the selected models using an ensemble pattern-based matching scheme. In this dissertation, two proxies based on particle tracking have been developed to assess the flow connectivity of models in the initial set. The proxies serve as fast approximations of finite-difference flow simulation and are meant to provide rapid estimates of the connectivity of the aquifer models. Modifications have also been made within the model selection workflow to accommodate the particular problem of application to a carbon sequestration project. The applicability of the proxies is tested on both synthetic models and real field case studies. It is demonstrated that the first proxy captures areal migration to a reasonable extent while failing to adequately capture the vertical, buoyancy-driven flow of CO₂. This limitation is addressed in the second proxy, whose applicability is demonstrated not only for horizontal migration but also for buoyancy-driven flow. Both proxies are tested as standalone approximations of numerical simulation and within the larger model selection framework.
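The clustering step of the workflow (step 2) can be illustrated with a short sketch: proxy-derived connectivity responses for a prior ensemble of aquifer models are projected with principal component analysis and grouped with k-means. This is a minimal illustration of the general idea, not the author's implementation; the array shapes, feature counts and cluster number below are assumptions, and the responses are random stand-ins.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # Hypothetical proxy responses: one row per prior aquifer model,
    # one column per connectivity/arrival-time feature from the particle-tracking proxy.
    rng = np.random.default_rng(42)
    proxy_responses = rng.normal(size=(500, 30))   # 500 prior models, 30 proxy features

    # Dimension reduction followed by k-means clustering (step 2 of the workflow).
    scores = PCA(n_components=3).fit_transform(proxy_responses)
    labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(scores)

    # Each cluster gathers models with similar dynamic behaviour; Bayes' rule is then
    # applied on this reduced (clustered) model space in step 3.
    for k in range(8):
        print(f"cluster {k}: {np.sum(labels == k)} models")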
572

An efficient algorithm for face sketch synthesis using Markov weight fields and cascade decomposition method

Zhou, Hao, 周浩 January 2012
Great progress has been made in face sketch synthesis in recent years. State-of-the-art methods commonly apply a Markov Random Fields (MRF) model to select local sketch patches from a set of training data. Such methods, however, have two major drawbacks. Firstly, the MRF model used cannot synthesize new sketch patches. Secondly, the optimization problem in solving the MRF is NP-hard. In this thesis, a novel Markov Weight Fields (MWF) model is proposed. By applying linear combinations of candidate patches, MWF is capable of synthesizing new sketch patches. The MWF model can be formulated as a convex quadratic programming (QP) problem for which the optimal solution is guaranteed. Based on the Markov property of the MWF model, a cascade decomposition method (CDM) is further proposed for solving such a large-scale QP problem efficiently. Experiments show that the proposed CDM is very efficient, taking only about 2.4 seconds. To deal with illumination changes in input photos, five special shading patches are included as candidate patches in addition to the patches selected from the training data. These patches help preserve the structure of the face under different illumination conditions and synthesize shadows similar to those in the input photos. Extensive experiments on the CUHK face sketch database, the AR database and Chinese celebrity photos show that the proposed model outperforms the common MRF model used in other state-of-the-art methods and is robust to illumination changes. / Computer Science / Master of Philosophy
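The core of the idea, expressing each synthesized sketch patch as a convex combination of candidate patches, reduces to a small quadratic program per patch. The following sketch solves one such sub-problem (non-negative weights that sum to one) with a general-purpose solver; the patch dimensions and data are random stand-ins, and the solver is a generic substitute, not the cascade decomposition method itself.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    K, D = 10, 64                      # 10 candidate patches, 8x8 pixels each (assumed sizes)
    candidates = rng.random((K, D))    # rows: vectorised candidate sketch patches
    target = rng.random(D)             # vectorised target patch (stand-in for the data term)

    # Convex QP: minimise ||w @ candidates - target||^2  s.t.  w >= 0, sum(w) = 1.
    def objective(w):
        r = w @ candidates - target
        return r @ r

    w0 = np.full(K, 1.0 / K)
    res = minimize(objective, w0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * K,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])

    weights = res.x
    new_patch = weights @ candidates   # a *new* patch, not just a selected one
    print("weights:", np.round(weights, 3))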
573

Distributed computing and cryptography with general weak random sources

Li, Xin, Ph. D. 14 August 2015
The use of randomness in computer science is ubiquitous. Randomized protocols have turned out to be much more efficient than their deterministic counterparts. In addition, many problems in distributed computing and cryptography are impossible to solve without randomness. However, these applications typically require uniform random bits, while in practice almost all natural random phenomena are biased. Moreover, even originally uniform random bits can be damaged if an adversary learns some partial information about these bits. In this thesis, we study how to run randomized protocols in distributed computing and cryptography with imperfect randomness. We use the most general model for imperfect randomness where the weak random source is only required to have a certain amount of min-entropy. One important tool here is the randomness extractor. A randomness extractor is a function that takes as input one or more weak random sources, and outputs a distribution that is close to uniform in statistical distance. Randomness extractors are interesting in their own right and are closely related to many other problems in computer science. Giving efficient constructions of randomness extractors with optimal parameters is one of the major open problems in the area of pseudorandomness. We construct network extractor protocols that extract private random bits for parties in a communication network, assuming that they each start with an independent weak random source, and some parties are corrupted by an adversary who sees all communications in the network. These protocols imply fault-tolerant distributed computing protocols and secure multi-party computation protocols where only imperfect randomness is available. The probabilistic method shows that there exists an extractor for two independent sources with logarithmic min-entropy, while known constructions are far from achieving these parameters. In this thesis we construct extractors for two independent sources with any linear min-entropy, based on a computational assumption. We also construct the best known extractors for three independent sources and affine sources. Finally we study the problem of privacy amplification. In this model, two parties share a private weak random source and they wish to agree on a private uniform random string through communications in a channel controlled by an adversary, who has unlimited computational power and can change the messages in arbitrary ways. All previous results assume that the two parties have local uniform random bits. We show that this problem can be solved even if the two parties only have local weak random sources. We also improve previous results in various aspects by constructing the first explicit non-malleable extractor and giving protocols based on this extractor.
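As a toy illustration of what a two-source extractor does (this is the classic Chor-Goldreich inner-product construction, not one of the constructions developed in the thesis), the inner product mod 2 of two independent n-bit weak sources yields a single output bit that is close to uniform when the two min-entropies together exceed roughly the bit length. The bias parameters below are illustrative.

    import numpy as np

    def inner_product_extractor(x_bits, y_bits):
        """One output bit: <x, y> mod 2 (Chor-Goldreich two-source extractor)."""
        return int(np.dot(x_bits, y_bits)) % 2

    # Two independent, biased (weak) 16-bit sources -- parameters here are illustrative.
    rng = np.random.default_rng(1)
    x = (rng.random(16) < 0.7).astype(int)   # bits biased towards 1
    y = (rng.random(16) < 0.3).astype(int)   # bits biased towards 0
    print(inner_product_extractor(x, y))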
574

Modeling cross-classified data with and without the crossed factors' random effects' interaction

Wallace, Myriam Lopez 08 September 2015
The present study investigated estimation of the variance of the cross-classified factors' random effects' interaction for cross-classified data structures. Results for two different three-level cross-classified random effects models (CCREM) were compared: Model 1 included the estimation of this variance component, while Model 2 assumed the value of this variance component was zero and did not estimate it. The second model is the model most commonly assumed by researchers utilizing a CCREM to estimate cross-classified data structures. These two models were first applied to a real-world data set, and parameter estimates for both estimating models were compared. The results of this analysis served as a guide for the generating parameter values of the Monte Carlo simulation that followed. The Monte Carlo simulation was conducted to compare the two estimating models under several manipulated conditions and assess their impact on parameter recovery. The manipulated conditions included classroom sample size, the structure of the cross-classification, the intra-unit correlation coefficient (IUCC), and the cross-classified factors' variance component values. Relative parameter and standard error bias were calculated for the fixed effect coefficient estimates, the random effects' variance components, and the associated standard errors of both. When Model 1 was used to estimate the simulated data, no substantial bias was found for any of the parameter estimates or their associated standard errors, even for conditions with the smallest average within-cell sample size (4 students). When Model 2 was used to estimate the simulated data, substantial bias occurred for the level-1 and level-2 variance components. Several of the manipulated conditions in the study affected the magnitude of the bias in these variance estimates. Given that level-1 and level-2 variance components are often used to inform researchers' decisions about factors of interest, such as classroom effects, assessing possible bias in these estimates is important. The results are discussed, followed by implications and recommendations for applied researchers who use a CCREM to estimate cross-classified data structures.
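The two bias measures used in simulation studies of this kind can be computed as below. The generating value and replicate estimates are invented numbers, shown only to make the formulas concrete; this is a generic sketch, not the study's code.

    import numpy as np

    def relative_parameter_bias(estimates, true_value):
        """Mean of (estimate - true) / true across Monte Carlo replications."""
        estimates = np.asarray(estimates, dtype=float)
        return np.mean((estimates - true_value) / true_value)

    def relative_se_bias(se_estimates, estimates):
        """Mean estimated SE relative to the empirical SD of the estimates, minus 1."""
        empirical_sd = np.std(estimates, ddof=1)
        return np.mean(se_estimates) / empirical_sd - 1.0

    # Hypothetical replicate estimates of a level-2 variance component (true value 0.10).
    rng = np.random.default_rng(7)
    est = rng.normal(loc=0.08, scale=0.01, size=1000)   # biased downward on purpose
    se = rng.normal(loc=0.012, scale=0.001, size=1000)
    print(relative_parameter_bias(est, 0.10))   # roughly -0.20: substantial negative bias
    print(relative_se_bias(se, est))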
575

The impact of weights’ specifications with the multiple membership random effects model

Galindo, Jennifer Lynn 08 September 2015
The purpose of the simulation was to assess the impact of weight pattern assignment when using the multiple membership random effects model (MMREM). In contrast with most previous methodological research using the MMREM, mobility was not randomly assigned; rather, the likelihood of student mobility was generated as a function of the student predictor. Two true weight patterns were used to generate the data (random equal and random unequal). For each set of generated data, the model was estimated using the true weights and two incorrect fixed weight patterns (fixed equal and fixed unequal) similar to those used in practice by applied researchers. Several design factors were manipulated, including the percent mobility, the ICC, and the true generating values of the level-one and level-two mobility predictors. To assess parameter recovery, relative parameter bias was calculated for the fixed effects and the random effects variance components. Standard error (SE) bias was also calculated for the standard errors estimated for each fixed effect. Substantial differences in relative parameter bias between the weight patterns used were observed for the level-two school mobility predictor across conditions, as well as for the level-two random effects variance component in some conditions. Substantial differences in SE bias between the weight patterns used were also found for the school mobility predictor in some conditions. Substantial SE and parameter bias were found for some parameters for which bias was not anticipated. The results, discussion, future directions for research, and implications for applied researchers are presented.
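The role of the weight pattern in the MMREM can be seen in how a mobile student's school contribution is formed: the random effects of all schools attended are combined with weights that sum to one. The sketch below contrasts fixed equal and fixed unequal weights for a hypothetical student who attended three schools; all numbers are invented for illustration and do not reflect the study's generating values.

    import numpy as np

    rng = np.random.default_rng(3)
    school_effects = rng.normal(0.0, 0.3, size=3)   # random effects of the 3 schools attended

    equal_w = np.array([1/3, 1/3, 1/3])             # fixed equal weights
    unequal_w = np.array([0.6, 0.3, 0.1])           # fixed unequal weights (e.g., time in school)

    # Multiple membership contribution to the student's predicted outcome:
    #   sum over attended schools of w_h * u_h.
    print("equal weights:  ", equal_w @ school_effects)
    print("unequal weights:", unequal_w @ school_effects)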
576

The Transformed Rejection Method for Generating Random Variables, an Alternative to the Ratio of Uniforms Method

Hörmann, Wolfgang, Derflinger, Gerhard January 1994 (PDF)
Theoretical considerations and empirical results show that the one-dimensional quality of non-uniform random numbers is poor and the discrepancy is high when they are generated by the ratio of uniforms method combined with linear congruential generators. This observation motivates the suggestion to replace the ratio of uniforms method by transformed rejection (also called exact approximation or almost exact inversion), as the above problem does not occur for this method. Using the function $G(x)=\left(\frac{a}{1-x}+b\right)x$ with appropriate $a$ and $b$ as an approximation of the inverse distribution function, the transformed rejection method can be used for the same distributions as the ratio of uniforms method. The resulting algorithms for the normal, the exponential and the t-distribution are short and easy to implement. Considering the number of uniform deviates required, the code length and the speed, the suggested algorithms are superior to the ratio of uniforms method and compare well with other algorithms suggested in the literature. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
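A generic transformed rejection step works as follows: with $G$ an approximation of the inverse CDF, $X = G(U)$ has proposal density $1/G'(G^{-1}(x))$, and a rejection step corrects for the approximation error. The sketch below applies this to the standard exponential distribution with a deliberately crude $G(u)=u/(1-u)$; the constants are not the tuned $a$, $b$ values from the paper, and the bound $M=4/e$ holds only for this particular choice of $f$ and $G$.

    import numpy as np

    rng = np.random.default_rng(5)

    # Target: standard exponential density f(x) = exp(-x), x >= 0.
    f = lambda x: np.exp(-x)

    # Crude approximate inverse CDF G(u) = u / (1 - u)  (the exact one is -log(1 - u)).
    G = lambda u: u / (1.0 - u)
    dG = lambda u: 1.0 / (1.0 - u) ** 2          # G'(u)

    # Acceptance weight r(u) = f(G(u)) * G'(u); for this f and G it peaks at u = 1/2 with value 4/e.
    M = 4.0 / np.e

    def transformed_rejection(n):
        out = []
        while len(out) < n:
            u = rng.random()
            x = G(u)
            if rng.random() * M <= f(x) * dG(u):  # accept with probability f(G(u)) G'(u) / M
                out.append(x)
        return np.array(out)

    sample = transformed_rejection(100_000)
    print(sample.mean(), sample.var())            # both should be close to 1 for Exp(1)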
577

The Automatic Generation of One- and Multi-dimensional Distributions with Transformed Density Rejection

Leydold, Josef, Hörmann, Wolfgang January 1997 (PDF)
A rejection algorithm called "transformed density rejection" is presented. It uses a new method for constructing simple hat functions for a unimodal density $f$. It is based on the idea of transforming $f$ with a suitable transformation $T$ such that $T(f(x))$ is concave. The hat function is then constructed by taking the pointwise minimum of tangents which are transformed back to the original scale. The resulting algorithm works very well for a large class of distributions and is fast. The method is also extended to the two- and multi-dimensional case. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
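A minimal sketch of the construction for the common choice $T = \log$: tangents of $\log f$ at a few design points give a piecewise-exponential hat from which one can sample by inversion, followed by a rejection step. The example below uses the unnormalised standard normal density and arbitrarily chosen design points, and it omits the squeeze and the adaptive setup of the actual algorithm.

    import numpy as np

    rng = np.random.default_rng(9)

    log_f = lambda x: -0.5 * x ** 2          # unnormalised log-density (standard normal), concave
    dlog_f = lambda x: -x

    # Design points for the tangents (chosen so that no tangent slope is zero).
    p = np.array([-2.0, -0.7, 0.7, 2.0])
    y, d = log_f(p), dlog_f(p)

    # Breakpoints where neighbouring tangents intersect, plus the two infinite tails.
    b = (y[1:] - y[:-1] + d[:-1] * p[:-1] - d[1:] * p[1:]) / (d[:-1] - d[1:])
    lo = np.concatenate(([-np.inf], b))
    hi = np.concatenate((b, [np.inf]))

    # Each hat segment is c_i * exp(d_i * x) on [lo_i, hi_i]; integrate it analytically.
    c = np.exp(y - d * p)
    e_lo, e_hi = np.exp(d * lo), np.exp(d * hi)   # the tail terms are 0 since d has the right sign
    areas = c * (e_hi - e_lo) / d

    def tdr_sample(n):
        out = []
        prob = areas / areas.sum()
        while len(out) < n:
            i = rng.choice(len(p), p=prob)                         # pick a hat segment
            u = rng.random()
            x = np.log(e_lo[i] + u * (e_hi[i] - e_lo[i])) / d[i]   # invert the segment's CDF
            hat_log = y[i] + d[i] * (x - p[i])
            if rng.random() <= np.exp(log_f(x) - hat_log):         # rejection step (hat >= f)
                out.append(x)
        return np.array(out)

    s = tdr_sample(50_000)
    print(s.mean(), s.std())                                       # close to 0 and 1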
578

Deterministic extractors

Kamp, Jesse John 28 August 2008
Not available
579

Random Focusing of Tsunami Waves

Degueldre, Henri-Philippe 14 October 2015
No description available.
580

Modelling Probability Distributions from Data and its Influence on Simulation

Hörmann, Wolfgang, Bayar, Onur January 2000 (PDF)
Generating random variates as a generalisation of a given sample is an important task for stochastic simulations. The three main methods suggested in the literature are: fitting a standard distribution, constructing an empirical distribution that approximates the cumulative distribution function, and generating variates from the kernel density estimate of the data. The last method is practically unknown in the simulation literature although it is as simple as the other two. A comparison of the theoretical performance of the methods and the results of three small simulation studies show that a variance-corrected version of kernel density estimation performs best and should be used for generating variates directly from a sample. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
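The variance-corrected kernel density approach recommended above can be sketched as a smoothed bootstrap whose output is rescaled so that its variance matches the sample variance. The Gaussian kernel, the rule-of-thumb bandwidth and the stand-in data below are assumptions; the correction shown is the standard smoothed-bootstrap variance correction, not necessarily the exact variant studied in the paper.

    import numpy as np

    rng = np.random.default_rng(11)
    data = rng.gamma(shape=2.0, scale=1.5, size=200)     # the observed sample (stand-in data)

    def kde_variates(data, n, rng):
        """Smoothed bootstrap from a Gaussian KDE with variance correction."""
        data = np.asarray(data, dtype=float)
        m, s = data.mean(), data.std(ddof=1)
        h = 1.06 * s * len(data) ** (-1 / 5)             # rule-of-thumb bandwidth
        picks = rng.choice(data, size=n, replace=True)   # resample the data
        smoothed = picks + h * rng.standard_normal(n)    # add kernel noise
        # Rescale so the variance of the generated variates matches the sample variance.
        return m + (smoothed - m) / np.sqrt(1.0 + h ** 2 / s ** 2)

    x = kde_variates(data, 10_000, rng)
    print(data.var(ddof=1), x.var(ddof=1))               # should be close to each other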
