351

Meta-Analysis: A Comparison of Fixed Effects and Random Effects Models with Illustrative Examples

Chen, Fang 12 1900 (has links)
Meta-analysis has been widely used in clinical research because it provides a useful tool for combining results from a series of trials addressing the same question. Two major approaches to study-to-study variation can be used in a meta-analysis: the fixed effects model, which assumes that each study has the same true effect size, and the random effects model, which assumes that the true effect size is a random variable that varies between studies. When there are covariates arising from the studies, regression models can be used to explain their effects on the between-study variation in effect size. The purpose of this project is to draw some general conclusions about the statistical methods used in meta-analyses by re-examining several clinical examples that presented some problems. Four illustrative examples of recent meta-analyses were selected and re-examined. Both fixed effects and random effects models were used. In addition, regression models were used in two examples. Some general conclusions were drawn about the statistical aspects of meta-analysis. The overall estimate from the fixed effects model tends to be overly influenced by large trials and may result in contradictory conclusions when extreme trials (small vs. large samples) are combined. Therefore, it is advocated that the weights allocated to each trial in any meta-analysis be explicitly calculated and displayed. The random effects model takes a more balanced account of all studies and allows for other unknown factors that may affect the effect size. Therefore, the random effects model and random effects regression model are more appropriate for these clinical meta-analyses. / Thesis / Master of Science (MS)
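The contrast the abstract draws can be sketched numerically. The following is a minimal illustration (not the thesis's actual computation): a fixed-effect inverse-variance pooled estimate versus a DerSimonian-Laird random-effects estimate, on made-up data where one large trial disagrees with several small ones.

```python
import numpy as np

def fixed_effect(y, v):
    """Inverse-variance pooling: assumes one common true effect size."""
    w = 1.0 / v
    return np.sum(w * y) / np.sum(w)

def random_effects(y, v):
    """DerSimonian-Laird: adds a between-study variance tau^2 to each
    study's variance, so no single trial can dominate the weights."""
    w = 1.0 / v
    mu = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu) ** 2)                 # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)       # method-of-moments estimate
    w_re = 1.0 / (v + tau2)
    return np.sum(w_re * y) / np.sum(w_re)

# Hypothetical data: one large trial (tiny variance) near zero effect,
# three small trials with a clear positive effect.
y = np.array([0.0, 0.8, 0.8, 0.8])
v = np.array([0.01, 0.20, 0.25, 0.30])
print(fixed_effect(y, v))    # ~0.09: dominated by the large trial
print(random_effects(y, v))  # ~0.46: the small trials get a fair say
```

Displaying the weights `1/v` versus `1/(v + tau2)` makes visible exactly the imbalance the abstract recommends reporting.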
352

Some limit theorems for a one-dimensional branching random walk.

Russell, Peter Cleland January 1972 (has links)
No description available.
353

Analytic Results for Hopping Models with Excluded Volume Constraint

Toroczkai, Zoltan 09 April 1997 (has links)
Part I: The Theory of Brownian Vacancy Driven Walk. We analyze the lattice walk performed by a tagged member of an infinite 'sea' of particles filling a d-dimensional lattice, in the presence of a single vacancy. The vacancy may be occupied, with probability 1/2d, by any of its 2d nearest neighbors, so that it executes a Brownian walk. Particle-particle exchange is forbidden; the only interaction between particles is hard-core exclusion. Thus, the tagged particle, differing from the others only by its tag, moves only when it exchanges places with the hole. In this sense, it is a random walk "driven" by the Brownian vacancy. The probability distributions for its displacement and for the number of steps taken, after n steps of the vacancy, are derived. Neither is a Gaussian! We also show that the only nontrivial dimension in which the walk is recurrent is d=2. As an application, we compute the expected energy shift caused by a Brownian vacancy in a model for an extreme anisotropic binary alloy. In the last chapter we present a Monte Carlo study and a mean-field analysis of interface erosion caused by mobile vacancies. Part II: One-Dimensional Periodic Hopping Models with Broken Translational Invariance. Case of a Mobile Directional Impurity. We study a random walk on a one-dimensional periodic lattice with arbitrary hopping rates. Further, the lattice contains a single mobile, directional impurity (defect bond), across which the rate is fixed at another arbitrary value. Due to the defect, translational invariance is broken, even if all other rates are identical. The structure of the Master equations leads naturally to the introduction of a new entity, associated with the walker-impurity pair, which we call the quasi-walker. An analytic solution for the distributions in the steady-state limit is obtained.
The velocities and diffusion constants for both the random walker and the impurity are given; they are simply related to those of the quasi-particle through physically meaningful equations. As an application, we extend the Duke-Rubinstein reptation model of gel electrophoresis to include polymers with impurities and give the exact steady-state distribution. / Ph. D.
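The Part I mechanism is simple to simulate. Below is an illustrative Monte Carlo sketch (not code from the thesis): on a fully occupied lattice only the vacancy's and the tagged particle's positions matter, since all untagged particles are interchangeable, so two coordinates suffice for the whole "sea" in d = 2.

```python
import random

def vacancy_driven_walk(n_steps, seed=0):
    """Tagged particle starts at the origin; the vacancy starts adjacent.
    Each step the vacancy exchanges with a uniformly chosen nearest
    neighbor (probability 1/2d = 1/4 in d = 2); the tagged particle
    moves only when it is the chosen neighbor (hard-core exclusion)."""
    rng = random.Random(seed)
    vac, tag = (1, 0), (0, 0)
    tag_steps = 0                   # steps actually taken by the tagged particle
    for _ in range(n_steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        nbr = (vac[0] + dx, vac[1] + dy)
        if nbr == tag:              # the tagged particle hops into the hole
            vac, tag = tag, vac
            tag_steps += 1
        else:                       # an untagged particle fills the hole
            vac = nbr
    return tag, tag_steps

pos, k = vacancy_driven_walk(100000)
# k is the number of steps the tagged particle took after n vacancy
# steps; sampling it over many runs illustrates the non-Gaussian
# distributions derived in the thesis.
```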
354

Reliability based design methodology incorporating residual strength prediction of structural fiber reinforced polymer composites under stochastic variable amplitude fatigue loading

Post, Nathan L. 01 April 2008 (has links)
The research presented in this dissertation furthers the state of the art in reliability-based design of composite structures subjected to high-cycle variable amplitude (spectrum) fatigue loads. The focus is on fatigue analyses for axially loaded fiber reinforced polymer (FRP) composites that contain a significant proportion of fibers in the loading direction and thus exhibit fiber-direction-dominated failure. The four papers presented in this dissertation describe the logical progression used to develop an improved reliability-based methodology for fatigue-critical design. Throughout the analysis, extensive experimental fatigue data on several material systems were used to verify the assumptions and suggest the path forward. A comparison of 12 fatigue model approaches from the literature showed that a simple linear residual strength approach (Broutman and Sahu) improves fatigue life prediction relative to the Palmgren-Miner rule, while more complex residual strength models did not consistently improve on Broutman and Sahu. The effect of load history randomness on fatigue life was evaluated using experimental results for spectra characterized by the first-order autocorrelation of the stress events. For approximately reversed Rayleigh-distributed fatigue loading, load sequence was not critical to the material behavior. Based on observations of empirical data and evaluation of the micro-mechanical deterioration and failure phenomena of FRP composites under fatigue loading, a new residual strength model for tension and compression under any load history was proposed. This model was then implemented in a stochastic framework, and a method was proposed to enable calculation of load and resistance factor design (LRFD) parameters for realistic reliabilities with relatively few computations.
The proposed approach has significant advantages over traditional lifetime-damage-sum-based reliability analysis and provides a significant step toward enabling more accurate reliability-based design with composite materials. / Ph. D.
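The two damage rules compared in the first paper can be contrasted with a small sketch. The S-N curve and load blocks below are hypothetical (stresses normalized by static strength), not data from the dissertation:

```python
def miner_damage(blocks, sn):
    """Palmgren-Miner rule: failure is predicted when the sum of cycle
    ratios n_i / N(s_i) reaches 1, independent of stress level or order."""
    return sum(n / sn(s) for s, n in blocks)

def broutman_sahu_residual(blocks, sn, r0):
    """Linear residual-strength rule: a block of n cycles at stress s
    reduces the static strength r0 by (n / N(s)) * (r0 - s); failure is
    predicted once the remaining strength falls to the applied stress."""
    r = r0
    for s, n in blocks:
        r -= (n / sn(s)) * (r0 - s)
    return r

# Hypothetical power-law S-N curve: N = (r0 / s) ** 10
r0 = 1.0
sn = lambda s: (r0 / s) ** 10

blocks = [(0.8, 4.0), (0.5, 300.0)]          # (stress level, cycles) pairs
d = miner_damage(blocks, sn)                 # ~0.72: Miner predicts no failure yet
r = broutman_sahu_residual(blocks, sn, r0)   # ~0.77: already below the 0.8 peak
# The residual-strength rule flags failure if the 0.8 block is reapplied,
# while the Miner damage sum is still comfortably below 1.
```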
355

An Analysis of Random Student Drug Testing Policies and Patterns of Practice In Virginia Public Schools

Lineburg, Mark Young 09 March 2005 (has links)
This study had two purposes. First, it was designed to determine which Virginia public school districts have articulated policies governing random drug testing of students and whether those policies align with U.S. Supreme Court standards and Virginia statutes. The second purpose was to ascertain the patterns of practice in selected Virginia school districts that currently conduct random drug testing of students. This included identifying which student groups were being tested and for which drugs. It was also of interest to learn how school districts monitor the testing program and whether drug testing practices were aligned with the policies that govern them. Data were gathered by examining student handbooks and district policies to determine which school districts had drug testing policies. These policies were then analyzed using a legal framework constructed from U.S. Supreme Court standards that have emerged from case law governing search and seizure in schools. Finally, data on patterns of practice were collected through in-depth interviewing and observation of the individuals responsible for implementing student drug testing in districts that have such programs. The analyses revealed that the current policies and patterns of practice in random drug testing programs in Virginia public schools comply with Supreme Court standards and state statutes. Student groups subject to testing in Virginia public schools include student athletes and students in extracurricular activities in grades eight through twelve. Monitoring systems in the school districts implementing random drug testing were not consistent. There is evidence that the school districts implementing random drug testing programs have strong community support for them. / Ed. D.
356

Gaussian Processes for Power System Monitoring, Optimization, and Planning

Jalali, Mana 26 July 2022 (has links)
The proliferation of renewables, electric vehicles, and power electronic devices calls for innovative approaches to learn, optimize, and plan the power system. The uncertain and volatile nature of the integrated components necessitates using swift and probabilistic solutions. Gaussian process regression is a machine learning paradigm that provides closed-form predictions with quantified uncertainties. The key property of Gaussian processes is the natural ability to integrate the sensitivity of the labels with respect to features, yielding improved accuracy. This dissertation tailors Gaussian process regression for three applications in power systems. First, a physics-informed approach is introduced to infer the grid dynamics using synchrophasor data with minimal network information. The suggested method is useful for a wide range of applications, including prediction, extrapolation, and anomaly detection. Further, the proposed framework accommodates heterogeneous noisy measurements with missing entries. Second, a learn-to-optimize scheme is presented using Gaussian process regression that predicts the optimal power flow minimizers given grid conditions. The main contribution is leveraging sensitivities to expedite learning and achieve data efficiency without compromising computational efficiency. Third, Bayesian optimization is applied to solve a bi-level minimization used for strategic investment in electricity markets. This method relies on modeling the cost of the outer problem as a Gaussian process and is applicable to non-convex and hard-to-evaluate objective functions. The designed algorithm shows significant improvement in speed while attaining a lower cost than existing methods. / Doctor of Philosophy / The proliferation of renewables, electric vehicles, and power electronic devices calls for innovative approaches to learn, optimize, and plan the power system. 
The uncertain and volatile nature of the integrated components necessitates swift and probabilistic solutions. This dissertation focuses on three practically important problems stemming from power system modernization. First, a novel approach is proposed that improves power system monitoring, the first and necessary step for stable operation of the network. The suggested method applies to a wide range of applications and is adaptable to heterogeneous and noisy measurements with missing entries. The second problem focuses on predicting the minimizers of an optimization task, and a computationally efficient framework is put forth to expedite this process. The third part of this dissertation identifies investment portfolios for electricity markets that yield maximum revenue and minimum cost.
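The closed-form predictions with quantified uncertainties that the abstract refers to can be sketched in a few lines. This is a generic Gaussian process regression example (squared-exponential kernel, toy data), not the dissertation's physics-informed model:

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential kernel matrix between 1-D input arrays a, b."""
    d = a[:, None] - b[None, :]
    return sf ** 2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(x, y, xs, noise=1e-2):
    """Closed-form GP posterior mean and pointwise variance at test
    points xs, given training inputs x and noisy labels y."""
    k = rbf(x, x) + noise * np.eye(len(x))
    ks = rbf(x, xs)
    mean = ks.T @ np.linalg.solve(k, y)
    cov = rbf(xs, xs) - ks.T @ np.linalg.solve(k, ks)
    return mean, np.diag(cov)

x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.sin(x)                                # toy measurements
mean, var = gp_predict(x, y, np.array([0.5, 4.0]))
# var[0] (inside the data range) is small; var[1] (far outside it) is
# near the prior variance: the prediction comes with its own error bars.
```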
357

A race toward the origin between n random walks

Denby, Daniel Caleb 02 June 2010 (has links)
This dissertation studies systems of "competing" discrete random walks as discrete- and continuous-time processes. A system is thought of as containing n imaginary particles performing random walks on lines parallel to the x-axis in Cartesian space. The particles act completely independently of each other and have, in general, different starting coordinates. In the discrete-time situation, the motion of the n particles is governed by n independent streams of Bernoulli trials with success probabilities p₁, p₂, …, pₙ respectively. A success for any particle at a trial causes that particle to move one unit toward the origin, and a failure causes it to take a "zero-step" (i.e., remain stationary). A probabilistic description is first given of the positions of the particles at arbitrary points in time, and this is extended to provide time-dependent and time-independent probabilities of which particle is the winner, that is to say, of which particle first reaches the origin. In this case "draws" are possible and the relevant probabilities are derived. The results are expressed, in particular, in terms of Generalized Hypergeometric Functions. In addition, formulae are given for the duration of what may now be regarded as a race with winning post at the origin. In the continuous-time situation, the motion of the n particles is governed by n independent Poisson streams, in general having different parameters. A treatment similar to that for the discrete-time situation is given, with the exception of draw probabilities, which in this case are not possible. Approximations are obtained in many cases. Apart from their practical utility, these give insight into the operation of the systems in that they reveal how changes in one or more of the parameters may affect the win and draw probabilities and also the duration of the race. A chapter is devoted to practical applications.
Here it is shown how the theory of random walks racing toward the origin can be utilized as a basic framework for explaining the operation of, and answering pertinent questions concerning, several apparently diverse situations. Examples are Lanchester combat theory, inventory control, reliability, and queueing theory. / Ph. D.
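The discrete-time race is easy to estimate by simulation when the closed forms get unwieldy. A minimal sketch (the starting positions and success probabilities below are invented for illustration):

```python
import random

def race(starts, probs, rng):
    """Run one race: each trial, particle i steps one unit toward the
    origin with probability probs[i]. Returns the indices that reach the
    origin first; more than one index means a draw."""
    pos = list(starts)
    while True:
        for i, p in enumerate(probs):
            if rng.random() < p:
                pos[i] -= 1
        winners = [i for i, x in enumerate(pos) if x <= 0]
        if winners:
            return winners

def estimate(starts, probs, n_races=20000, seed=1):
    """Monte Carlo estimate of each particle's win probability and the
    draw probability."""
    rng = random.Random(seed)
    wins = [0] * len(starts)
    draws = 0
    for _ in range(n_races):
        w = race(starts, probs, rng)
        if len(w) == 1:
            wins[w[0]] += 1
        else:
            draws += 1
    return [c / n_races for c in wins], draws / n_races

p_win, p_draw = estimate(starts=[3, 3], probs=[0.6, 0.4])
# The p = 0.6 particle wins most races; draws occur with positive
# probability in discrete time, unlike the Poisson-stream version.
```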
358

Random Vector Generation on Large Discrete Spaces

Shin, Kaeyoung 17 December 2010 (has links)
This dissertation addresses three important open questions in the context of generating random vectors having discrete support. The first question relates to the "NORmal To Anything" (NORTA) procedure, which is easily the most widely used among methods for general random vector generation. While NORTA enjoys such popularity, issues remain surrounding its efficient and correct implementation, particularly when generating random vectors having denumerable support. These complications stem primarily from having to safely compute (on a digital computer) certain infinite summations that are inherent to the NORTA procedure. This dissertation addresses the summation issue within NORTA through the construction of easily computable truncation rules that can be applied in a range of discrete random vector generation contexts. The second question tackled in this dissertation relates to developing a customized algorithm for generating multivariate Poisson random vectors. The algorithm developed (TREx) is uniformly fast (about a hundred to a thousand times faster than NORTA) and presents opportunities for straightforward extensions to the case of negative binomial marginal distributions. The third and arguably most important question addressed in the dissertation is that of exact nonparametric random vector generation on finite spaces. Specifically, it is well-known that NORTA does not guarantee exact generation in dimensions higher than two. This represents an important gap in the random vector generation literature, especially in view of contexts that stipulate strict adherence to the dependency structure of the requested random vectors. This dissertation fully addresses this gap through the development of Maximum Entropy methods. The methods are exact, very efficient, and work on any finite discrete space with stipulated nonparametric marginal distributions.
All code developed as part of the dissertation was written in MATLAB, and is publicly accessible through the Web site https://filebox.vt.edu/users/pasupath/pasupath.htm. / Ph. D.
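The NORTA idea itself fits in a few lines, which also shows where the infinite-summation issue enters: the discrete inverse CDF below must be truncated somewhere. A sketch with Poisson marginals; the cutoff kmax here is a naive stand-in for the safe truncation rules the dissertation constructs:

```python
import numpy as np
from math import erf

def poisson_inv(u, lam, kmax=200):
    """Poisson inverse CDF via a truncated cdf table. Choosing kmax is
    exactly the safe-summation question NORTA raises for denumerable
    support; 200 is an arbitrary cutoff for this illustration."""
    pmf = np.zeros(kmax)
    pmf[0] = np.exp(-lam)
    for k in range(1, kmax):
        pmf[k] = pmf[k - 1] * lam / k        # recursive pmf evaluation
    return np.searchsorted(np.cumsum(pmf), u)

def norta_poisson(lams, rho_z, n, seed=0):
    """NORTA: draw correlated normals with base correlation rho_z, push
    each coordinate through the normal CDF to a uniform, then through
    the target marginal's inverse CDF."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho_z], [rho_z, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))   # Phi(z)
    return np.column_stack([poisson_inv(u[:, j], lams[j]) for j in (0, 1)])

x = norta_poisson(lams=[3.0, 5.0], rho_z=0.7, n=20000)
# Marginal means approach 3 and 5; the output correlation is positive
# and close to, though typically below, the base correlation 0.7.
```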
359

A Random Coefficient Analysis of the United States Gasoline Market From 1960-1995

Laffman, John D. 12 September 2002 (has links)
This study uses a random coefficient estimation procedure to analyze the U.S. gasoline market from 1960-1995 with three main objectives: (1) provide an empirical methodology that can estimate a gasoline demand function capable of performing well in prediction; (2) evaluate the elasticities of the models presented to determine which model is more accurate at capturing supply shocks that impacted gasoline demand; and (3) evaluate the behavior of the elasticites of the beta coefficients. This research will show that the variation from historical economic patterns was a result of supply shocks. I argue that when the OLS model of the gasoline market developed by William H. Greene is used supply shocks are not well captured because the coefficients are fixed. If the random coefficient model developed by P.A.V.B. Swamy is introduced, the coefficients vary over time, and thereby, enable supply shocks to be included in the model and more accurate forecasts are produced, as well as, meaningful time patterns in the beta coefficients. / Master of Arts
360

A Reconfigurable Random Access MAC Implementation for Software Defined Radio Platforms

Anyanwu, Uchenna Kevin 03 August 2012 (has links)
Wireless communications technology, ranging from satellite communications to sensor networks, has benefited from the development of flexible SDR platforms. SDR is used in military radio devices to reconfigure waveforms, frequency, and modulation schemes in both software and hardware to improve communication performance in harsh environments. In the commercial sector, SDRs are present in cellular infrastructure, where base stations can reconfigure operating parameters to meet specific cellular coverage goals. In response to these enhancements, industry leaders in cellular technology (such as Lucent, Nortel, and Motorola) have embraced the cost advantages of implementing SDRs. In the future, there will be a need for more capable SDR platforms on inexpensive hardware that can balance workloads between several computational processing elements while minimizing power cost to accomplish multiple goals. This thesis presents the development of a random access MAC protocol for the IRIS platform. An assessment of different SDR hardware and software platforms is conducted. From this assessment, we present several SDR technology requirements for networking research and discuss the impact of these requirements on future SDR platforms. As a consequence of these requirements, we choose the USRP family of SDR hardware and the IRIS software platform to develop our two random access MAC implementations: Aloha with Explicit ACK and Aloha with Implicit ACK. A point-to-point link was tested with our protocol, and then this link was extended to a 3-hop (4-node) network. To improve our protocols' efficiency, we implemented carrier sensing on the FPGA of the USRP E100, an embedded SDR hardware platform. We also present simulations using OMNeT++ software to accompany our experimental data and, moreover, show how our protocol scales as more nodes are added to the network. / Master of Science
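The random access idea underlying both Aloha variants can be illustrated with a toy slotted-Aloha throughput simulation. This is the generic textbook model, not the IRIS implementation:

```python
import random

def slotted_aloha_throughput(n_nodes, p_tx, n_slots=100000, seed=0):
    """Each backlogged node transmits in a slot with probability p_tx;
    a slot carries a successful frame only when exactly one node
    transmits (two or more collide, zero leaves the slot idle)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_slots):
        tx = sum(rng.random() < p_tx for _ in range(n_nodes))
        if tx == 1:
            ok += 1
    return ok / n_slots

good = slotted_aloha_throughput(n_nodes=10, p_tx=0.1)   # near-optimal p = 1/n
bad = slotted_aloha_throughput(n_nodes=10, p_tx=0.3)    # too aggressive
# Theory: per-slot throughput is n * p * (1 - p)^(n - 1), maximized at
# p = 1/n and approaching 1/e (about 0.37) as n grows.
```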
