491

Using functional annotation to characterize genome-wide association results

Fisher, Virginia Applegate 11 December 2018 (has links)
Genome-wide association studies (GWAS) have successfully identified thousands of variants robustly associated with hundreds of complex traits, but the biological mechanisms driving these results remain elusive. Functional annotation, describing the roles of known genes and regulatory elements, provides additional information about associated variants. This dissertation explores the potential of these annotations to explain the biology behind observed GWAS results. The first project develops a random-effects approach to genetic fine mapping of trait-associated loci. Functional annotation and estimates of the enrichment of genetic effects in each annotation category are integrated with linkage disequilibrium (LD) within each locus and GWAS summary statistics to prioritize variants with plausible functionality. Applications of this method to simulated and real data show good performance across a wider range of scenarios than previous approaches. The second project focuses on the estimation of enrichment by annotation category. I derive the distribution of GWAS summary statistics as a function of annotations and LD structure and perform maximum likelihood estimation of enrichment coefficients in two simulated scenarios. The resulting estimates are less variable than those of previous methods, but the asymptotic theory of standard errors is often not applicable because the likelihood function is non-convex. In the third project, I investigate the problem of selecting an optimal set of tissue-specific annotations with the greatest relevance to a trait of interest. I consider three selection criteria defined in terms of the mutual information between functional annotations and GWAS summary statistics. These algorithms correctly identify enriched categories in simulated data, but in an application to a GWAS of BMI the penalty for redundant features outweighs the modest relationships with the outcome, yielding empty selected feature sets, owing to the weaker overall association and the high similarity between tissue-specific regulatory features. All three projects require little in the way of prior hypotheses regarding the mechanism of genetic effects. Such data-driven approaches have the potential to illuminate unanticipated biological relationships, but they are also limited by the high dimensionality of the data relative to the moderate strength of the signals under investigation. Together, these projects expand the set of tools available to researchers for drawing biological insights from GWAS results.
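
The mutual-information selection criterion in the third project can be illustrated with a small sketch. This is a hypothetical example, not the dissertation's implementation: the simulated annotation matrix, the significance cutoff, and the use of scikit-learn's mutual_info_classif are all assumptions made for illustration.

    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    rng = np.random.default_rng(0)
    n_variants, n_annotations = 5000, 15

    # Hypothetical data: binary functional-annotation tracks and GWAS -log10 p-values.
    annotations = rng.integers(0, 2, size=(n_variants, n_annotations))
    neglog_p = rng.exponential(scale=1.0, size=n_variants)
    in_cat0 = annotations[:, 0] == 1
    neglog_p[in_cat0] += rng.exponential(scale=1.0, size=in_cat0.sum())  # enrich category 0

    # Label variants as "associated" above an assumed significance threshold.
    associated = (neglog_p > 3.0).astype(int)

    # Rank annotation categories by estimated mutual information with the association signal.
    mi = mutual_info_classif(annotations, associated, discrete_features=True, random_state=0)
    print("top categories by mutual information:", np.argsort(mi)[::-1][:5])

A redundancy-aware criterion, as described in the abstract, would additionally penalize a candidate annotation by its mutual information with the annotations already selected.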
492

Conditional random fields with dynamic potentials for Chinese named entity recognition.

January 2008 (has links)
Wu, Yiu Kei.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. Includes bibliographical references (p. 69-75). Abstracts in English and Chinese.

Table of contents:
Chapter 1 Introduction (p.1)
  1.1 Chinese NER Problem (p.1)
  1.2 Contribution of Our Proposed Framework (p.3)
Chapter 2 Related Work (p.6)
  2.1 Hidden Markov Models (p.7)
  2.2 Maximum Entropy Models (p.8)
  2.3 Conditional Random Fields (p.10)
Chapter 3 Our Proposed Model (p.14)
  3.1 Background (p.14)
    3.1.1 Problem Formulation (p.14)
    3.1.2 Conditional Random Fields (p.16)
    3.1.3 Semi-Markov Conditional Random Fields (p.26)
  3.2 The Formulation of Our Proposed Model (p.28)
    3.2.1 The Main Principle (p.28)
    3.2.2 The Detailed Formulation (p.36)
    3.2.3 Adapting Features from Original CRF to CRFDP (p.51)
Chapter 4 Experiments (p.54)
  4.1 Datasets (p.55)
  4.2 Features (p.57)
  4.3 Evaluation Metrics (p.61)
  4.4 Results and Discussion (p.63)
Chapter 5 Conclusions and Future Work (p.67)
Bibliography (p.69)
Appendix A (p.76)
Appendix B (p.78)
Appendix C (p.88)
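
For readers unfamiliar with the underlying model, the sketch below scores a tag sequence with a standard linear-chain CRF. It is generic background rather than the thesis's dynamic-potential formulation (CRFDP); the toy tag set, random potentials, and sequence are assumptions for illustration.

    import numpy as np

    # Toy tag set and potentials for one observed character sequence (all hypothetical).
    tags = ["B-PER", "I-PER", "O"]
    T, seq_len = len(tags), 4

    rng = np.random.default_rng(1)
    emission = rng.normal(size=(seq_len, T))   # unary potentials: tag score at each position
    transition = rng.normal(size=(T, T))       # pairwise potentials: tag-to-tag scores

    def sequence_score(y):
        """Unnormalized log-score of a tag sequence under a linear-chain CRF."""
        s = emission[0, y[0]]
        for t in range(1, seq_len):
            s += transition[y[t - 1], y[t]] + emission[t, y[t]]
        return s

    def log_partition():
        """Forward algorithm: log-sum-exp of the scores of all possible tag sequences."""
        alpha = emission[0].copy()
        for t in range(1, seq_len):
            alpha = emission[t] + np.logaddexp.reduce(alpha[:, None] + transition, axis=0)
        return np.logaddexp.reduce(alpha)

    y = [0, 1, 2, 2]   # B-PER I-PER O O
    print("log P(y | x) =", sequence_score(y) - log_partition())

Training adjusts the feature weights behind these potentials so that log P(y | x) is maximized over labeled data.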
493

Sobre a Equivalência dos Modelos Antiferromagnético Diluído e Ferromagnético em Campo Aleatório: Versão Hierárquica / On the equivalence of the diluted antiferromagnetic model and the ferromagnetic model in a random field: hierarchical version

Pontin, Luiz Francisco 24 September 1990 (has links)
We present a hierarchical version of the Ising model to show the equivalence between the ferromagnetic model in a random field and the diluted antiferromagnetic model in a uniform field. The equivalence is based on the fact that renormalization group transformations, when applied to the diluted antiferromagnetic model, produce a random external field at the new scale as the combined effect of the external field and the dilution. We also verify that, when contours inside contours are not taken into account, the models analyzed exhibit a phase transition for dimension d greater than or equal to two. The method combines the Peierls and Imry-Ma arguments with renormalization group transformations, which become an exact procedure in the hierarchical version.
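
As background for the equivalence (standard, non-hierarchical forms; the thesis's hierarchical construction is not reproduced here), the two Hamiltonians can be written as

    H_{\mathrm{RFIM}} = -J \sum_{\langle i,j \rangle} s_i s_j \; - \; \sum_i h_i s_i,
    \qquad
    H_{\mathrm{DAFF}} = J \sum_{\langle i,j \rangle} \epsilon_i \epsilon_j s_i s_j \; - \; h \sum_i \epsilon_i s_i,

where s_i = ±1 are spins, the h_i are independent random fields, the ε_i ∈ {0,1} are dilution variables, and h is a uniform field. On a bipartite lattice, flipping the spins of one sublattice turns the antiferromagnetic coupling into a ferromagnetic one while the uniform field becomes staggered; combined with the random dilution, it acts at the new scale like an effective random field, which is the correspondence the hierarchical version makes exact.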
494

Random Harmonic Polynomials

Unknown Date (has links)
The study of random polynomials, and in particular the number and behavior of zeros of random polynomials, has been well studied; the first significant progress was made by Kac, who found an integral formula for the expected number of real zeros of polynomials with real coefficients. This formula, as well as adaptations of the formula to complex polynomials and random fields, shows an interesting dependence of the number and distribution of zeros on the particular method of randomization. Three prevalent models of significant study are the Kostlan model, the Weyl model, and the naive model, in which the coefficients of the polynomial are standard Gaussian random variables. A harmonic polynomial is a complex function of the form h(z) = p(z) + \overline{q(z)}, where p and q are complex analytic polynomials. Li and Wei adapted the Kac integral formula for the expected number of zeros to study random harmonic polynomials and took particular interest in their interpretation of the Kostlan model. In this thesis we find asymptotic results for the number of zeros of random harmonic polynomials under both the Weyl model and the naive model as the degree of the harmonic polynomial increases. We compare the findings to the Kostlan model as well as to the analytic analogs of each model. We end by establishing results which lead to open questions and conjectures about random harmonic polynomials. We ask and partially answer the question, "When does the number and behavior of the zeros of a random harmonic polynomial asymptotically emulate the same model of random complex analytic polynomial as the degree increases?" We also inspect the variance of the number of zeros of random harmonic polynomials, motivated by the question of whether the distribution of the number of zeros concentrates near its mean as the degree of the harmonic polynomial increases. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2017. / FAU Electronic Theses and Dissertations Collection
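
For reference, one common way to write the three coefficient ensembles for a degree-n analytic part p(z) = a_0 + a_1 z + ... + a_n z^n is the following; the normalizations and the choice of Gaussian convention are assumptions stated here, not taken from the thesis:

    \text{Kostlan: } a_j \sim \mathcal{N}\!\left(0,\, \binom{n}{j}\right), \qquad
    \text{Weyl: } a_j \sim \mathcal{N}\!\left(0,\, \frac{1}{j!}\right), \qquad
    \text{naive: } a_j \sim \mathcal{N}(0,\, 1),

with the coefficients independent, and a random harmonic polynomial h(z) = p(z) + \overline{q(z)} drawing the coefficients of p and of q from the same ensemble.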
495

Sensitivity Analyses for Tumor Growth Models

Mendis, Ruchini Dilinika 01 April 2019 (has links)
This study consists of sensitivity analyses for two previously developed tumor growth models: the Gompertz model and the quotient model. The two models are considered in both continuous and discrete time. In continuous time, model parameters are estimated using the least-squares method, while in discrete time the partial-sum method is used. Moreover, frequentist and Bayesian methods are used to construct confidence intervals and credible intervals for the model parameters. We apply Markov Chain Monte Carlo (MCMC) techniques, namely the Random Walk Metropolis algorithm with a non-informative prior and the Delayed Rejection Adaptive Metropolis (DRAM) algorithm, to construct the parameters' posterior distributions and then obtain credible intervals.
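
A minimal sketch of the continuous-time, frequentist part of such an analysis is given below. The Gompertz parameterization, the synthetic data, and the use of scipy's curve_fit are assumptions for illustration; the thesis's quotient model, partial-sum estimation, and MCMC runs are not reproduced.

    import numpy as np
    from scipy.optimize import curve_fit

    # One common Gompertz parameterization (an assumption; the thesis's exact form may differ):
    # V(t) = K * (V0 / K) ** exp(-a * t), the solution of dV/dt = a * V * ln(K / V).
    def gompertz(t, V0, K, a):
        return K * (V0 / K) ** np.exp(-a * t)

    # Synthetic tumor-volume data with noise, standing in for experimental measurements.
    rng = np.random.default_rng(2)
    t = np.linspace(0, 30, 16)
    volumes = gompertz(t, 50.0, 2000.0, 0.15) * (1 + 0.05 * rng.standard_normal(t.size))

    # Continuous-time least-squares estimation of (V0, K, a).
    popt, pcov = curve_fit(gompertz, t, volumes, p0=[40.0, 1500.0, 0.1])
    se = np.sqrt(np.diag(pcov))

    # Frequentist 95% confidence intervals from the asymptotic covariance.
    for name, est, s in zip(["V0", "K", "a"], popt, se):
        print(f"{name}: {est:.3g}  (95% CI {est - 1.96 * s:.3g} to {est + 1.96 * s:.3g})")

A Bayesian analogue would place priors on (V0, K, a) and sample the posterior with a random-walk Metropolis or DRAM sampler to obtain credible intervals.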
496

Two dimensional cellular automata and pseudorandom sequence generation

Sh, Umer Khayyam 13 November 2019 (has links)
Maximum-length linear feedback shift registers (LFSRs) based on primitive polynomials are commonly used to generate maximum-length sequences (m-sequences). An m-sequence is a pseudorandom sequence that exhibits ideal randomness properties such as balance, run, and autocorrelation, but has low linear complexity. One-dimensional Cellular Automata (1D CA) have been used to generate m-sequences and pseudorandom sequences that have high linear complexity and good randomness. This thesis considers the use of two-dimensional Cellular Automata (2D CA) to generate m-sequences and pseudorandom sequences with high linear complexity and good randomness. The properties of these sequences are compared with those of the corresponding m-sequences and the best sequences generated by 1D CAs. / Graduate
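
As a point of reference for the LFSR baseline (a generic illustration; the thesis's register lengths, CA rules, and comparisons are not reproduced), an m-sequence can be generated from a primitive polynomial as in the sketch below. The tap positions and seed are assumptions.

    # Fibonacci LFSR over GF(2). The recurrence a_k = a_{k-1} XOR a_{k-4} has connection
    # polynomial 1 + x + x^4, which is primitive, so any nonzero seed yields an m-sequence
    # of period 2^4 - 1 = 15.
    def lfsr_bits(taps, seed, count):
        reg = list(seed)                      # reg[0] = most recent bit
        out = []
        for _ in range(count):
            out.append(reg[-1])               # output the oldest bit
            fb = 0
            for d in taps:
                fb ^= reg[d - 1]              # a_{k-d}
            reg = [fb] + reg[:-1]             # shift in the feedback bit
        return out

    seq = lfsr_bits(taps=(1, 4), seed=(1, 0, 0, 0), count=30)
    print("".join(map(str, seq)))             # the 15-bit pattern repeats
    print(sum(seq[:15]), "ones in one period")

Any window of 2^n - 1 consecutive output bits contains 2^(n-1) ones and 2^(n-1) - 1 zeros, illustrating the balance property mentioned above.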
497

Modeling Market and Regulatory Mechanisms for Pollution Abatement with Sharp and Random Variables

Fielden, Thomas Robert 01 January 2011 (has links)
This dissertation is motivated by the problem of uncertainty and sensitivity in business-class models such as the carbon emission abatement policy model featured in this work. Uncertain model inputs are represented by numerical random variables, and a computational methodology is developed to numerically compute business-class models as if sharp inputs were given. A new description of correlation between random variables, which arises spontaneously within a numerical model, is presented. Methods of numerically computing correlated random variables are implemented in software and presented. The major contribution of this work is a methodology for the numerical computation of models under uncertainty that expresses no preference among model input combinations based on their likelihood. The methodology presented here stands in sharp contrast to traditional Monte Carlo methods, which implicitly equate the likelihood of model input values with the importance of results. The new methodology shifts the computational burden from the likelihood of inputs to the resolution of the input space.
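
The contrast with Monte Carlo can be sketched with a toy model. Everything below is hypothetical; the cost function, input ranges, and distributions are invented for illustration and are not the dissertation's model or methodology.

    import numpy as np

    # Toy abatement-cost model: abatement quantity q times carbon price p.
    def abatement_cost(q, p):
        return q * p

    # Uncertain inputs represented only by their supports, not by likelihoods (assumed ranges).
    q_grid = np.linspace(10.0, 40.0, 61)       # abatement level (MtCO2)
    p_grid = np.linspace(5.0, 80.0, 151)       # carbon price ($/tCO2)

    # Support-based propagation: evaluate the model over the whole input space,
    # giving every input combination equal standing regardless of its likelihood.
    Q, P = np.meshgrid(q_grid, p_grid)
    costs = abatement_cost(Q, P)
    print("cost range over the full input space:", costs.min(), "to", costs.max())

    # Monte Carlo, by contrast, weights results by input likelihood: combinations in the
    # tails of the sampling distributions are rarely visited.
    rng = np.random.default_rng(3)
    mc = abatement_cost(rng.normal(25, 3, 100000), rng.normal(30, 5, 100000))
    print("Monte Carlo 1st-99th percentile:", np.percentile(mc, [1, 99]))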
498

Ray stretching statistics, hot spot formation, and universalities in weak random disorder

January 2018 (has links)
I review my three papers on ray stretching statistics, hot spot formation, and universality in motion through weak random media. In the first paper, we study the connection between stretching exponents and ray densities in weak ray scattering through a random medium. The stretching exponent is a quantitative measure that describes the degree of exponential convergence or divergence among nearby ray trajectories. In the context of non-relativistic particle motion through a correlated random potential, we show how particle densities are strongly related to the stretching exponents, where the 'hot spots' in the intensity profile correspond to minima in the stretching exponents. This strong connection is expected to be valid for different random potential distributions, and is also expected to apply to other physical contexts, such as deep ocean waves. The surprising minimum in the average stretching exponent is of great interest because of the associated appearance of the first generation of hot spots, and a detailed discussion is given in the third paper. In the second paper, we study the stretching statistics of weak ray scattering in various physical contexts and for different types of correlated disorder. The stretching exponent is mathematically linked to the monodromy matrix that evolves the phase space vector over time. From this point of view, we demonstrate analytically and numerically that the stretching statistics along the forward direction follow universal scaling relationships for different dispersion relations and in disorders of differing correlation structures. Predictions about the location of first caustics can be made using the universal evolution pattern of stretching exponents. Furthermore, we observe that the distribution of stretching exponents in 2D ray dynamics with small angular spread is equivalent to the same distribution in a simple 1D kicked model, which allows us to further explore the relation between stretching statistics and the form of the disorder. Finally, the third paper focuses on the 1D kicked model with stretching statistics that resemble 2D small-angle ray scattering. While the long-time behavior of the stretching exponent displays a simple linear growth, the behavior on the scale of the Lyapunov time is mathematically nontrivial. From an analysis of the evolving monodromy matrices, we demonstrate how the stretching exponent depends on the statistics of the second derivative of the random disorder, especially the mean and standard deviation. Furthermore, the maximal Lyapunov exponent, or the Lyapunov length, can be expressed as a nontrivial function of the mean and standard deviation of the kicks. Lastly, we show that the higher moments of the second derivative of the disorder have a small or negligible effect on the evolution of the stretching exponents and the maximal Lyapunov exponents. / Sicong Chen
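
The third paper's setting can be illustrated with a minimal 1D kicked tangent map. The map, the Gaussian curvature statistics, and the parameter values are assumptions for illustration, not the papers' exact model or disorder.

    import numpy as np

    rng = np.random.default_rng(4)

    def stretching_exponent(n_steps, mean_c, std_c):
        """Finite-time stretching exponent of a 1D kicked tangent map (illustrative model).

        c plays the role of the second derivative (curvature) of the random disorder at each
        kick; the one-step tangent map below is symplectic (unit determinant).
        """
        M = np.eye(2)
        log_norm = 0.0
        for _ in range(n_steps):
            c = rng.normal(mean_c, std_c)
            J = np.array([[1.0 - c, 1.0],     # kick (delta_p -= c * delta_x) then free drift
                          [-c,      1.0]])
            M = J @ M
            # Renormalize to avoid overflow, accumulating the log of the growth factor.
            scale = np.linalg.norm(M)
            log_norm += np.log(scale)
            M /= scale
        return log_norm / n_steps

    # The long-time slope approximates the maximal Lyapunov exponent; it depends on the
    # mean and standard deviation of the disorder's curvature, as the abstract notes.
    for std_c in (0.05, 0.2, 0.5):
        print(std_c, stretching_exponent(20000, mean_c=0.0, std_c=std_c))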
499

Analysis of Distresses in Asphalt Pavement Transitions on Bridge Approaches and Departures

Rajalingola, Manvitha 03 November 2017 (has links)
Some highway agencies in the United States are experiencing frequent distresses in asphalt pavements on bridge approaches and departures. Commonly observed distresses include alligator cracking and rutting, which reduce roadway smoothness and safety. To lessen these distresses, the extent and root causes of the problem need to be investigated. Based on Florida highway conditions, this research study mainly focused on (1) a literature review and identification of the extent of the problem; (2) collection of relevant pavement condition data and descriptive analysis; and (3) development of statistical models to determine the factors influencing distresses in asphalt pavements on bridge approaches and departures. To the best of my knowledge, this is the first study that uses a statistical model to determine the factors responsible for causing asphalt pavement distresses on bridge approaches and departures. As part of the literature review, a nationwide questionnaire survey was targeted toward U.S. state DOTs. The data collection and analysis specific to Florida highways found that in 2015, on Florida Interstate highways, about 27% of bridges with asphalt pavements on their approaches or departures showed signs of cracking, and about 20% of bridges had noticeable rutting in their approach or departure pavements. A random parameter linear regression model was applied to examine the factors that may influence distresses in asphalt pavements in Florida. Pavement condition was evaluated based on the Florida Department of Transportation (FDOT) 2015 pavement condition data and video log images, and other relevant data were collected from various sources such as the FDOT Roadway Characteristics Inventory (RCI) database, FDOT pavement management reports, and FDOT Ground Penetrating Radar (GPR) survey reports. The availability of GPR data, which provide pavement layer thicknesses, was a constraint that limited the number of bridge approach pavement sections included in the statistical modeling. Based on the limited data, the estimated results from the random parameter linear regression model showed that the variables influencing distresses in asphalt pavements on bridge approaches and departures, in terms of rutting and roughness, may include pavement age, annual average daily truck traffic, and surface friction course.
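
In the spirit of a random parameter model, the sketch below fits a regression in which one coefficient varies across groups. It is a simplified stand-in: the data are synthetic, the grouping by district and the variable names are assumptions, and a mixed-effects model is used in place of the study's exact specification.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in data: rut depth on bridge approach sections, grouped by district.
    rng = np.random.default_rng(5)
    n, n_districts = 300, 15
    district = rng.integers(0, n_districts, n)
    age = rng.uniform(1, 20, n)                            # pavement age (years)
    aadtt = rng.uniform(0.5, 8.0, n)                       # truck traffic (thousands/day)
    district_slope = rng.normal(0.02, 0.01, n_districts)   # age effect varies by district
    rut = 1.0 + district_slope[district] * age + 0.15 * aadtt + rng.normal(0, 0.3, n)

    df = pd.DataFrame({"rut": rut, "age": age, "aadtt": aadtt, "district": district})

    # Random-parameters (mixed-effects) linear regression: the coefficient on age is
    # allowed to vary across districts via a random intercept and random slope.
    model = smf.mixedlm("rut ~ age + aadtt", df, groups=df["district"], re_formula="~age")
    result = model.fit()
    print(result.summary())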
500

Analysis and enhancements of adaptive random testing

Merkel, Robert Graham, robert.merkel@benambra.org January 2005 (has links)
Random testing is a standard software testing method. It is a popular method for reliability assessment, but its use for debug testing has been opposed by some authorities. Random testing does not use any information to guide test case selection, and so, it is argued, testing is less likely to be effective than other methods. Based on the observation that failures often cluster in contiguous regions, Adaptive Random Testing (ART) is a more effective random testing method. While retaining random selection of test cases, selection is guided by the idea that tests should be widely spread throughout the input domain. A simple way to implement this concept, FSCS-ART, involves randomly generating a number of candidates and choosing the candidate most widely spread from any already-executed test. This method has already been shown to be up to 50% more effective than random testing. This thesis examines a number of theoretical and practical issues related to ART. Firstly, a theoretical examination of the scope of adaptive methods to improve testing effectiveness is conducted. Our results show that the maximum possible improvement in failure-detection effectiveness is only 50%, so ART performs close to this limit on many occasions. Secondly, the statistical validity of previous empirical results is examined. A mathematical analysis of the sampling distributions of the various failure-detection effectiveness measures shows that the measure preferred in previous studies has a slightly unusual distribution known as the geometric distribution, and that it and other measures are likely to show high variance, requiring very large sample sizes for accurate comparisons. A potential limitation of current ART methods is their relatively high selection overhead. A number of methods to obtain lower overheads are proposed and evaluated, involving a less strict randomness or wide-spreading criterion. Two methods use dynamic, as-needed partitioning to divide the input domain, spreading test cases throughout the partitions as required. Another involves using a class of numeric sequences called quasi-random sequences. In addition, a more efficient implementation of the existing FSCS-ART method is proposed using the mathematical structure known as the Voronoi diagram. Finally, the use of ART on programs whose input is non-numeric is examined. While existing techniques can be used to generate random non-numeric candidates, a criterion for 'wide spread' is required to perform ART effectively. It is proposed to use the notion of category-partition as such a criterion.
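
The FSCS-ART idea described above can be sketched directly; the candidate-set size, the two-dimensional unit-square input domain, and the toy failure region are assumptions for illustration.

    import random

    rng = random.Random(6)

    def fscs_art_next(executed, k=10, dim=2):
        """Fixed-Size-Candidate-Set ART: draw k random candidates and return the one
        whose nearest already-executed test case is farthest away (max-min distance)."""
        candidates = [tuple(rng.random() for _ in range(dim)) for _ in range(k)]
        if not executed:
            return candidates[0]

        def min_dist(c):
            return min(sum((a - b) ** 2 for a, b in zip(c, e)) ** 0.5 for e in executed)

        return max(candidates, key=min_dist)

    # Toy "program under test": it fails inside a small contiguous region of the unit
    # square, mimicking the failure clustering that motivates ART.
    def fails(x, y):
        return 0.60 <= x <= 0.70 and 0.30 <= y <= 0.40

    executed, count = [], 0
    while True:
        t = fscs_art_next(executed)
        executed.append(t)
        count += 1
        if fails(*t):
            break
    print("failure detected after", count, "test executions")

Plain random testing needs about 1/θ tests on average to hit a failure region occupying a fraction θ of the input domain (the geometric distribution mentioned above); the max-min rule spreads the executed tests more evenly, which is the source of ART's improvement.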
