281

Bayesian hierarchical modeling for longitudinal frequency data

Jordon, Joseph. January 2005 (has links)
Thesis (M.S.)--Duquesne University, 2005. / Title from document title page. Abstract included in electronic submission form. Includes bibliographical references and abstract.
282

Nonequilibrium Statistical Models: Guided Network Growth Under Localized Information and Perspectives on Electron Diffusion in Conductors

Trevelyan, Alexander 31 October 2018 (has links)
The ability to probe many-particle systems on a microscopic level has revolutionized the way we do statistical physics. As computational capabilities continue to grow exponentially, larger and more complex systems come within reach of microscopic analysis. In the field of network growth, the classical model has given way to competitive processes, in which networks are guided by some criteria at every step of their formation. We develop and analyze a new competitive growth process that permits intervention on growing networks using only local properties of the network when evaluating how to add new connections. We establish the critical behavior of this new method and explore potential uses in guiding the development of real-world networks. The classical system of electrons diffusing within a conductor similarly permits a microscopic analysis where, to date, studies of the macroscopic properties have dominated the literature. In order to extend our understanding of the theory that governs this diffusion—the fluctuation-dissipation theorem—we construct a physical model of the Johnson-Nyquist system of electrons embedded in the bulk of a conductor. Constructing the model involves deriving how the motion of each individual electron comes about via scattering processes in the conductor, then connecting this collective motion to the macroscopic observables of voltage and current that define Johnson-Nyquist noise. Once the equilibrium properties have been fully realized, an external perturbation can be applied in order to probe the behavior of the model as it deviates from equilibrium. In much the same way that competitive network growth revolutionized classical network theory, we aim to establish a model which can guide future research into nonequilibrium fluctuation-dissipation by providing a method for interacting with the system in a precise and well-controlled manner as it evolves over time. This model is presented in its current form in Chapter 3. The network-growth work is covered in Chapter 2, which has been published in Physical Review E as a Rapid Communication [1]. The writing and analysis were performed by me as the primary author. Eric Corwin and Georgios Tsekenis are listed as co-authors for their contribution to the analysis and for advisement on the work. This dissertation includes previously published and unpublished co-authored material.
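For reference, the equilibrium noise that the fluctuation-dissipation theorem predicts for such a system is the textbook Johnson-Nyquist relation, ⟨V²⟩ = 4·k_B·T·R·Δf. A minimal sketch of that macroscopic formula (not the dissertation's microscopic electron model):

```python
# Johnson-Nyquist thermal noise: in equilibrium the mean-square voltage
# across a resistor is <V^2> = 4 * k_B * T * R * df (a consequence of
# the fluctuation-dissipation theorem). Illustrative textbook relation
# only; the dissertation derives the noise from microscopic scattering.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def johnson_nyquist_vrms(resistance_ohms, temperature_k, bandwidth_hz):
    """RMS thermal noise voltage of a resistor in equilibrium."""
    return math.sqrt(4.0 * K_B * temperature_k * resistance_ohms * bandwidth_hz)

# A 1 kOhm resistor at 300 K over a 10 kHz bandwidth: ~0.41 microvolts.
print(johnson_nyquist_vrms(1e3, 300.0, 1e4))
```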
283

Digital computers and geodetic computation : solution of normal equations and error analysis of geodetic networks

Ashkenazi, V. January 1965 (has links)
No description available.
284

Stochasticity and fluctuations in non-equilibrium transport models

Whitehouse, Justin January 2016 (has links)
The transportation of mass is an inherently `non-equilibrium' process, relying on a current of mass between two or more locations. Life exists by necessity out of equilibrium and non-equilibrium transport processes are seen at all levels in living organisms, from DNA replication up to animal foraging. As such, biological processes are ideal candidates for modelling using non-equilibrium stochastic processes, but, unlike with equilibrium processes, there is as yet no general framework for their analysis. In the absence of such a framework we must study specific models to learn more about the behaviours and bulk properties of systems that are out of equilibrium. In this work I present the analysis of three distinct models of non-equilibrium mass transport processes. Each transport process is conceptually distinct but all share close connections with each other through a set of fundamental non-equilibrium models, which are outlined in Chapter 2. In this thesis I endeavour to understand at a more fundamental level the role of stochasticity and fluctuations in non-equilibrium transport processes. In Chapter 3 I present a model of a diffusive search process with stochastic resetting of the searcher's position, and discuss the effects of an imperfection in the interaction between the searcher and its target. Diffusive search processes are particularly relevant to the behaviour of searching proteins on strands of DNA, as well as more diverse applications such as animal foraging and computational search algorithms. The focus of this study was to calculate analytically the effects of the imperfection on the survival probability and the mean time to absorption at the target of the diffusive searcher. I find that the survival probability of the searcher decreases exponentially with time, with a decay constant which increases as the imperfection in the interaction decreases. This study also revealed the importance of the ratio of two length scales to the search process: the characteristic displacement of the searcher due to diffusion between reset events, and an effective attenuation depth related to the imperfection of the target. The second model, presented in Chapter 4, is a spatially discrete mass transport model of the same type as the well-known Zero-Range Process (ZRP). This model predicts a phase transition into a state where there is a macroscopically occupied `condensate' site. This condensate is static in the system, maintained by the balance of the current of mass into and out of it. However in many physical contexts, such as traffic jams, gravitational clustering and droplet formation, the condensate is seen to be mobile rather than static. In this study I present a zero-range model which exhibits a moving condensate phase and analyse its mechanism of formation. I find that, for certain parameter values in the mass `hopping' rate, effectively all of the mass forms a single-site condensate which propagates through the system followed closely by a short tail of small masses. This short tail is found to be crucial for maintaining the condensate, preventing it from falling apart. Finally, in Chapter 5, I present a model of an interface growing against an opposing, diffusive membrane. In lamellipodia in cells, the ratcheting effect of a growing interface of actin filaments against a membrane, which undergoes some thermal motion, allows the cell to extrude protrusions and move along a surface. The interface grows by way of polymerisation of actin monomers onto actin filaments which make up the structure that supports the interface. I model the growth of this interface by the stochastic polymerisation of monomers using a Kardar-Parisi-Zhang (KPZ) class interface against an obstructing wall that also performs a random walk. I find three phases in the dynamics of the membrane and interface as the bias in the membrane diffusion is varied from towards the interface to away from the interface. In the smooth phase, the interface is tightly bound to the wall and pushes it along at a velocity dependent on the membrane bias. In the rough phase the interface reaches its maximal growth velocity and pushes the membrane at this speed, independently of the membrane bias. The interface is rough, bound to the membrane at a subextensive number of contact points. Finally, in the unbound phase the membrane travels fast enough away from the interface for the two to become uncoupled, and the interface grows as a free KPZ interface. In all of these models stochasticity and fluctuations in the properties of the systems studied play important roles in the behaviours observed. We see modified search times, strong condensation and a dramatic change in interfacial properties, all of which are the consequence of just small modifications to the processes involved.
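A minimal Monte Carlo sketch of the ingredients described for Chapter 3 (one-dimensional diffusion with Poissonian resetting and an imperfect, partially absorbing target) can make the setup concrete. All parameters below are illustrative choices, not values from the thesis, and the contact rule is a crude discretisation of partial absorption:

```python
# Monte Carlo sketch: 1D diffusive searcher with Poissonian resetting to
# its starting point and an imperfect target at the origin. On contact
# the searcher is absorbed only with probability absorb_prob, a crude
# stand-in for the imperfect interaction analysed in Chapter 3.
import random

def mean_absorption_time(n_walkers=1000, dt=1e-3, D=1.0, r=1.0,
                         x0=1.0, absorb_prob=0.5):
    sigma = (2.0 * D * dt) ** 0.5      # diffusive step scale
    total = 0.0
    for _ in range(n_walkers):
        x, t = x0, 0.0
        while True:
            t += dt
            if random.random() < r * dt:           # reset event
                x = x0
                continue
            x += random.gauss(0.0, sigma)          # free diffusion
            if x <= 0.0:                           # reached the target
                if random.random() < absorb_prob:  # imperfect absorption
                    break
                x = 0.0                            # otherwise reflected
        total += t
    return total / n_walkers

print(mean_absorption_time())
```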
285

The statistical aspects of Boltzmann's H-theorem

Green, C. D. January 1954 (has links)
This thesis is concerned with the consideration of the H-theorem in a statistical manner and the information that may be derived from it as to the variation with time of an isolated mechanical system, and especially the approach to equilibrium. A historical introduction is given in which it is shown how the need for such a statistical approach arose, and how the question of the behaviour of the fluctuations about the values of H predicted by the unrestricted H-theorem became important. The type of behaviour suggested by the Ehrenfests is quoted and, to verify this, it is found to be necessary to consider actual models in detail. Two classical models, the urn model and the wind-wood model, are considered, and the procedure is then generalized to include the whole class of models consisting of two groups of particles, one group moving and interacting with the members of the second group, which are fixed. The transition probabilities, the rate of change of H, and the mean time of recurrence of a fluctuation are found for these models by considering the influence of fluctuations upon the Stosszahlansatz values for the numbers of collisions. The results confirm the postulates of the Ehrenfests, within assumptions common to the statistical treatment of collision processes.
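The urn model mentioned above lends itself to direct simulation. A minimal sketch of the Ehrenfest urn, tracking a discrete analogue of H as it relaxes toward equilibrium and then fluctuates about it (parameters are arbitrary; this is an illustration, not the thesis's calculation):

```python
# Ehrenfest urn model: N balls in two urns; each step a uniformly chosen
# ball moves to the other urn. A discrete analogue of Boltzmann's H
# decreases on average toward equilibrium while exhibiting the
# fluctuations whose recurrence the thesis analyses.
import math
import random

def simulate_urn(n_balls=100, n_steps=5000, seed=1):
    rng = random.Random(seed)
    n_left = n_balls                 # start far from equilibrium
    h_values = []
    for _ in range(n_steps):
        h = 0.0                      # H = sum_i p_i log p_i over the urns
        for n in (n_left, n_balls - n_left):
            p = n / n_balls
            if p > 0.0:
                h += p * math.log(p)
        h_values.append(h)
        if rng.random() < n_left / n_balls:
            n_left -= 1              # a left-urn ball was drawn
        else:
            n_left += 1
    return h_values

h = simulate_urn()
print(h[0], min(h))  # relaxes from 0 toward -log 2 ~ -0.693, then fluctuates
```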
286

Modelling genetic algorithms and evolving populations

Rogers, Alex January 2000 (has links)
A formalism for modelling the dynamics of genetic algorithms using methods from statistical physics, originally due to Prügel-Bennett and Shapiro, is extended to ranking selection, a form of selection commonly used in the genetic algorithm community. The extension allows a reduction in the number of macroscopic variables required to model the mean behaviour of the genetic algorithm. This reduction allows a more qualitative understanding of the dynamics to be developed without sacrificing quantitative accuracy. The work is extended beyond modelling the dynamics of the genetic algorithm. A caricature of an optimisation problem with many local minima is considered — the basin with a barrier problem. The first passage time — the time required to escape the local minimum and reach the global minimum — is calculated and insights gained as to how the genetic algorithm is searching the landscape. The interaction of the various genetic algorithm operators and how these interactions give rise to optimal parameter values is studied.
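For readers unfamiliar with the operator being modelled, here is a minimal sketch of linear ranking selection, the selection scheme the formalism is extended to (a generic implementation, not the statistical-physics formalism itself; the selection-pressure parameter s is a conventional choice):

```python
# Linear ranking selection: selection probability depends on fitness
# rank rather than raw fitness. Generic illustration of the operator
# analysed in the thesis; s (selection pressure) is an arbitrary choice.
import random

def ranking_select(population, fitness, s=1.5, rng=random):
    """Select one individual; the best gets weight s, the worst 2 - s."""
    n = len(population)
    order = sorted(range(n), key=lambda i: fitness[i])  # worst .. best
    weights = [(2 - s) + (2 * s - 2) * rank / (n - 1) for rank in range(n)]
    return population[rng.choices(order, weights=weights, k=1)[0]]

pop = ["a", "b", "c", "d"]
fit = [0.1, 0.4, 0.2, 0.9]
print([ranking_select(pop, fit) for _ in range(8)])  # "d" appears most often
```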
287

The effect on statistical inference of the degree of precision of rounded data

Tricker, Anthony R. January 1988 (has links)
This thesis concerns the effect of rounding on statistical procedures, where rounding is taken to be the grouping of data at the midpoints of equally spaced intervals. The characteristic function of the rounded distribution is obtained. This is used to derive general expressions for the moments of univariate and bivariate distributions that have been subject to rounding. The interactive effect of rounding and skewness on the moments is examined. The performance of certain normal test statistics is examined for rounded data. A study is carried out to obtain precise values for the significance level and power of these statistical tests for rounded data, over many distributions. Guidance is given on what is an appropriate degree of precision for normal data. Special consideration is given to how much non-normality can be allowed without the effect of rounding seriously distorting the significance level and power of a test. Standard methods of estimating the parameters of a distribution are compared with respect to loss in information caused by rounding. Normal, gamma and exponential distributions are examined. Computational methods are presented for computing the maximum likelihood estimates from rounded normal and gamma data. In general it is concluded that the effect of rounding on statistical procedures can be increased by the departure from normality of the population. It was found that less precision is required of the recorded data than that which is usually given.
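For the second moment, the effect of this kind of rounding has a classical closed form: grouping at the midpoints of width-h intervals inflates the variance by approximately h²/12 (Sheppard's correction). A quick numerical check, with the distribution and interval width chosen arbitrarily:

```python
# Numerical check of Sheppard's correction: rounding normal data to the
# midpoints of width-h intervals inflates the variance by ~ h^2 / 12.
# Distribution and h are arbitrary; the thesis treats general moments.
import random

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

random.seed(0)
h = 0.5
data = [random.gauss(0.0, 1.0) for _ in range(100_000)]
rounded = [round(x / h) * h for x in data]   # group at interval midpoints

print(variance(rounded) - variance(data))    # close to h * h / 12 ~ 0.0208
```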
288

A study of character recognition using geometric moments under conditions of simple and non-simple loss

Tucker, N. D. January 1974 (has links)
The theory of Loss Functions is a fundamental part of Statistical Decision Theory and of Pattern Recognition. However it is a subject which few have studied in detail. This thesis is an attempt to develop a simple character recognition process in which losses may be implemented when and where necessary. After a brief account of the history of Loss Functions and an introduction to elementary Decision Theory, some examples have been constructed to demonstrate how various decision boundaries approximate to the optimal boundary and what increase in loss would be associated with these sub-optimal boundaries. The results show that the Euclidean and Hamming distance discriminants can be sufficiently close approximations that the decision process may be legitimately simplified by the use of these linear boundaries. Geometric moments were adopted for the computer simulation of the recognition process because each moment is closely related to the symmetry and structure of a character, unlike many other features. The theory of moments is discussed, in particular their geometrical properties. A brief description of the programs used in the simulation follows. Two different data sets were investigated, the first being hand-drawn capitals and the second machine-scanned lower-case typescript. This latter set was in the form of a message, which presented interesting programming problems in itself. The results from the application of different discriminants to these sets under conditions of simple loss are analysed, and the recognition efficiencies are found to vary between about 30% and 99% depending on the number of moments being used and the type of discriminant. Next certain theoretical problems are studied. The relations between the rejection rate, the error rate and the rejection threshold are discussed both theoretically and practically. Also an attempt is made to predict theoretically the variation of efficiency with the number of moments used in the discrimination. This hypothesis is then tested on the data already calculated and shown to be true within reasonable limits. A discussion of moment ordering by defining their resolving powers is undertaken and it seems likely that the moments normally used unordered are among the most satisfactory. Finally, some time is devoted to methods of improving recognition efficiency. Information content is discussed along with the possibilities inherent in the use of digraph and trigraph probabilities. A breakdown of the errors in the recognition system adopted here is presented along with suggestions to improve the technique. The execution time of the different decision mechanisms is then inspected and a refined 2-stage method is produced. Lastly the various methods by which a decision mechanism might be improved are united under a common loss matrix, formed by a product of matrices each of which represents a particular facet of the recognition problem.
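The geometric moments used as features above have a simple definition, m_pq = Σ_x Σ_y x^p y^q I(x, y), where I is the binary image. A minimal sketch computing low-order moments and the centroid of a toy character image (the image is invented for illustration, not drawn from the thesis's data sets):

```python
# Geometric moments of a binary image: m_pq = sum_xy x^p * y^q * I(x, y).
# m_00 is the area; m_10/m_00 and m_01/m_00 give the centroid. Higher
# orders capture the symmetry and structure used as features above.

def geometric_moment(image, p, q):
    return sum((x ** p) * (y ** q) * val
               for y, row in enumerate(image)
               for x, val in enumerate(row))

# Toy 5x5 binary "character" (invented, not from the thesis's data).
img = [[0, 1, 1, 1, 0],
       [0, 1, 0, 0, 0],
       [0, 1, 1, 0, 0],
       [0, 1, 0, 0, 0],
       [0, 1, 0, 0, 0]]

m00 = geometric_moment(img, 0, 0)        # area (number of set pixels)
cx = geometric_moment(img, 1, 0) / m00   # centroid x
cy = geometric_moment(img, 0, 1) / m00   # centroid y
print(m00, cx, cy)
```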
289

Analysis of power functions of multiple comparisons tests

Liu, Wei January 1990 (has links)
No description available.
290

Variable selection in high dimensional semi-varying coefficient models

Chen, Chi 06 September 2013 (has links)
With the development of computing and sampling technologies, high dimensionality has become an important characteristic of data commonly encountered in science, such as data from bioinformatics, information engineering, and the social sciences. The varying coefficient model is a flexible and powerful statistical model for exploring dynamic patterns in many scientific areas. It is a natural extension of classical parametric models with good interpretability, and is becoming increasingly popular in data analysis. The main objective of this thesis is to apply the varying coefficient model to analyze high dimensional data, and to investigate the properties of regularization methods for high-dimensional varying coefficient models. We first discuss how to apply local polynomial smoothing and the smoothly clipped absolute deviation (SCAD) penalized methods to estimate varying coefficient models when the dimension of the model is diverging with the sample size. Based on the nonconcave penalized method and local polynomial smoothing, we suggest a regularization method to select significant variables from the model and estimate the corresponding coefficient functions simultaneously. Importantly, our proposed method can also identify constant coefficients at the same time. We investigate the asymptotic properties of our proposed method and show that it has the so-called "oracle property." We apply the Nonparametric Independence Screening (NIS) method to varying coefficient models with ultra-high-dimensional data. Based on the marginal varying coefficient model estimation, we establish the sure independence screening property under some regularity conditions for our proposed sure screening method. Combined with our proposed regularization method, we can systematically deal with high-dimensional or ultra-high-dimensional data using varying coefficient models. The nonconcave penalized method is a very effective variable selection method. However, maximizing such a penalized likelihood function is computationally challenging, because the objective functions are nondifferentiable and nonconcave. The local linear approximation (LLA) and local quadratic approximation (LQA) are two popular algorithms for dealing with such optimization problems. In this thesis, we revisit these two algorithms. We investigate the convergence rate of LLA and show that the rate is linear. We also study the statistical properties of the one-step estimate based on LLA under a generalized statistical model with a diverging number of dimensions. We suggest a modified version of LQA to overcome its drawback under high-dimensional models. Our proposed method avoids having to calculate the inverse of the Hessian matrix in the modified Newton-Raphson algorithm based on LQA. Our proposed methods are investigated by numerical studies and a real case study in Chapter 5.
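The SCAD penalty at the centre of the proposed regularization method has an explicit standard form (Fan and Li, 2001), and its derivative is what the LLA algorithm uses to build its surrogate at each iteration. A minimal sketch using the conventional default a = 3.7, which is not necessarily the value used in the thesis:

```python
# SCAD penalty (Fan and Li, 2001) and its derivative. The derivative is
# what LLA uses: at step k the penalty is linearised around the current
# estimate as scad_derivative(|beta_k|) * |beta|. a = 3.7 is the
# conventional default, not necessarily the thesis's choice.

def scad_penalty(t, lam, a=3.7):
    t = abs(t)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return -(t * t - 2 * a * lam * t + lam * lam) / (2 * (a - 1))
    return (a + 1) * lam * lam / 2

def scad_derivative(t, lam, a=3.7):
    t = abs(t)
    if t <= lam:
        return lam                           # behaves like the lasso near 0
    return max(a * lam - t, 0.0) / (a - 1)   # tapers to 0: less bias

for beta in (0.1, 1.0, 5.0):
    print(beta, scad_penalty(beta, lam=0.5), scad_derivative(beta, lam=0.5))
```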
