351.
Analytic Results for Hopping Models with Excluded Volume Constraint. Toroczkai, Zoltan (09 April 1997)
Part I: The Theory of Brownian Vacancy Driven Walk
We analyze the lattice walk performed by a tagged member of an infinite 'sea' of particles filling a d-dimensional lattice, in the presence of a single vacancy. The vacancy is allowed to be occupied with probability 1/2d by any of its 2d nearest neighbors, so that it executes a Brownian walk. Particle-particle exchange is forbidden; the only interaction between them is hard-core exclusion. Thus, the tagged particle, differing from the others only by its tag, moves only when it exchanges places with the hole. In this sense, it is a random walk "driven" by the Brownian vacancy. The probability distributions for its displacement and for the number of steps taken, after n steps of the vacancy, are derived. Neither is a Gaussian! We also show that the only nontrivial dimension where the walk is recurrent is d=2. As an application, we compute the expected energy shift caused by a Brownian vacancy in a model for an extremely anisotropic binary alloy. In the last chapter we present a Monte Carlo study and a mean-field analysis of interface erosion caused by mobile vacancies.
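The dynamics described above are straightforward to simulate. The following minimal Python sketch (not from the dissertation; the lattice size, starting positions, and periodic boundaries are illustrative assumptions) tracks the tagged particle's displacement and step count while the vacancy performs its Brownian walk for d = 2:

```python
import random

def vacancy_driven_walk(n_vacancy_steps, L=51):
    """Sketch of the d=2 Brownian-vacancy-driven walk: the tagged
    particle moves only when it exchanges places with the hole."""
    vacancy = (0, 0)
    tagged = (1, 0)                      # tagged particle starts beside the hole
    displacement = [0, 0]                # net displacement of the tagged particle
    n_moves = 0                          # steps actually taken by the tagged particle
    neighbors = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_vacancy_steps):
        dx, dy = random.choice(neighbors)        # each neighbor chosen w.p. 1/2d
        target = ((vacancy[0] + dx) % L, (vacancy[1] + dy) % L)
        if target == tagged:                     # hole and tagged particle swap
            tagged = vacancy
            displacement[0] -= dx                # particle hops opposite to the hole
            displacement[1] -= dy
            n_moves += 1
        vacancy = target
    return displacement, n_moves

print(vacancy_driven_walk(100_000))
```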
Part II: One-Dimensional Periodic Hopping Models with Broken Translational Invariance. Case of a Mobile Directional Impurity
We study a random walk on a one-dimensional periodic lattice with arbitrary hopping rates. Further, the lattice contains a single mobile, directional impurity (defect bond), across which the rate is fixed at another arbitrary value. Due to the defect, translational invariance is broken, even if all other rates are identical. The structure of the Master equations leads naturally to the introduction of a new entity, associated with the walker-impurity pair, which we call the quasi-walker. An analytic solution for the distributions in the steady-state limit is obtained. The velocities and diffusion constants for both the random walker and the impurity are given; they are simply related to those of the quasi-walker through physically meaningful equations. As an application, we extend the Duke-Rubinstein repton model of gel electrophoresis to include polymers with impurities and give the exact steady-state distribution. / Ph. D.
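For the translationally invariant part of such models, the steady state can be obtained numerically from the Master equation. The sketch below is an illustration of that baseline only (it does not implement the dissertation's quasi-walker construction, which additionally carries the mobile defect bond); it solves for the stationary distribution and drift velocity of a continuous-time walk on a ring with arbitrary rates:

```python
import numpy as np

def ring_steady_state(p, q):
    """Stationary distribution and drift velocity for a continuous-time walk
    on an N-site ring; p[k] is the rate k -> k+1, q[k] the rate k -> k-1."""
    N = len(p)
    p, q = np.asarray(p, float), np.asarray(q, float)
    W = np.zeros((N, N))                  # Master-equation generator: dP/dt = W P
    for k in range(N):
        W[(k + 1) % N, k] += p[k]         # gain at k+1 from hops out of k
        W[(k - 1) % N, k] += q[k]         # gain at k-1 from hops out of k
        W[k, k] -= p[k] + q[k]            # loss out of k
    A = np.vstack([W, np.ones(N)])        # append normalization sum(pi) = 1
    b = np.zeros(N + 1); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    v = float(np.dot(pi, p - q))          # mean drift per unit time
    return pi, v

pi, v = ring_steady_state(p=[2.0, 1.0, 3.0, 1.5], q=[1.0, 0.5, 1.0, 1.0])
```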
352.
Reliability based design methodology incorporating residual strength prediction of structural fiber reinforced polymer composites under stochastic variable amplitude fatigue loading. Post, Nathan L. (01 April 2008)
The research presented in this dissertation furthers the state of the art for reliability-based design of composite structures subjected to high-cycle variable amplitude (spectrum) fatigue loads. The focus is on fatigue analyses for axially loaded fiber reinforced polymer (FRP) composites that contain a significant proportion of fibers in the loading direction and thus have fiber-direction dominated failure. The four papers presented in this dissertation describe the logical progression used to develop an improved reliability-based methodology for fatigue-critical design. Throughout the analysis, extensive experimental fatigue data on several material systems were used to verify the assumptions and suggest the path forward.
A comparison of 12 fatigue model approaches from the literature showed that a simple linear residual strength approach (Broutman and Sahu) provides an improvement in fatigue life prediction compared to the Palmgren-Miner rule, while more complex residual strength models did not consistently improve on Broutman and Sahu. The effect of load history randomness on fatigue life was evaluated using experimental results for spectra characterized by the first-order autocorrelation of the stress events. For approximately reversed, Rayleigh-distributed fatigue loading, load sequence was not critical to the material behavior. Based on observations of empirical data and evaluation of the micromechanical deterioration and failure phenomena of FRP composites under fatigue loading, a new residual strength model for tension and compression under any load history was proposed. This model was then implemented in a stochastic framework, and a method was proposed to enable calculation of the load and resistance factor design (LRFD) parameters for realistic reliabilities with relatively few computations. The proposed approach has significant advantages over traditional lifetime-damage-sum-based reliability analysis and provides a significant step toward enabling more accurate reliability-based design with composite materials. / Ph. D.
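To make the two baseline rules concrete, here is a hedged sketch of the Palmgren-Miner damage sum and the classical Broutman-Sahu linear residual strength rule (the S-N curve and load blocks are invented for illustration; this is not the new residual strength model proposed in the dissertation):

```python
def miner_damage(blocks, N_of):
    """Palmgren-Miner rule: failure is predicted when the damage sum reaches 1."""
    return sum(cycles / N_of(s) for s, cycles in blocks)

def broutman_sahu(blocks, N_of, R0=1.0):
    """Broutman-Sahu linear residual strength: each cycle at peak stress s
    removes (R0 - s)/N(s) of strength; failure once strength falls to s."""
    R = R0
    for s, cycles in blocks:
        R -= cycles * (R0 - s) / N_of(s)
        if R <= s:
            return R, True                 # (residual strength, failed)
    return R, False

# invented S-N curve and load blocks, purely for illustration
N_of = lambda s: 10 ** (10 * (1.0 - s))    # cycles to failure at peak stress s
blocks = [(0.8, 50), (0.6, 4000)]          # (peak stress, cycles), high-low sequence
print(miner_damage(blocks, N_of))          # 0.9   -> survives per Miner
print(broutman_sahu(blocks, N_of))         # (0.74, False) -> survives, weakened
```

Unlike the Miner sum, the residual strength rule tracks a physically meaningful quantity (remaining strength) and checks it against the current applied stress, which is why it can capture load sequence effects.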
353.
An Analysis of Random Student Drug Testing Policies and Patterns of Practice in Virginia Public Schools. Lineburg, Mark Young (09 March 2005)
There were two purposes to this study. First, the study was designed to determine which Virginia public school districts have articulated policies that govern random drug testing of students and if school districts' policies aligned with U.S. Supreme Court standards and Virginia statutes. The second purpose was to ascertain the patterns of practice in selected Virginia school districts that currently conduct random drug testing of students. This included identifying which student groups were being tested and for which drugs. It was also of interest to learn how school districts monitor the testing program and if drug testing practices were aligned with the policies that govern them. Data were gathered by examining student handbooks and district policies in order to determine which school districts had drug testing policies. These policies then were analyzed using a legal framework constructed from U.S. Supreme Court standards that have emerged from case law governing search and seizure in schools. Finally, data on patterns of practice were collected through in-depth interviewing and observation of those individuals responsible for implementing student drug testing in those districts that have such programs. The analyses revealed that the current policies and patterns of practice in random drug testing programs in Virginia public schools comply with Supreme Court standards and state statutes. Student groups subject to testing in Virginia public schools include student athletes and students in extracurricular activities in grades eight through twelve. Monitoring systems in the school districts implementing random drug testing were not consistent. There is evidence that the school districts implementing random drug testing programs have strong community support for the program. / Ed. D.
354.
Gaussian Processes for Power System Monitoring, Optimization, and Planning. Jalali, Mana (26 July 2022)
The proliferation of renewables, electric vehicles, and power electronic devices calls for innovative approaches to learn, optimize, and plan the power system. The uncertain and volatile nature of the integrated components necessitates using swift and probabilistic solutions.
Gaussian process regression is a machine learning paradigm that provides closed-form predictions with quantified uncertainties. The key property of Gaussian processes is the natural ability to integrate the sensitivity of the labels with respect to features, yielding improved accuracy. This dissertation tailors Gaussian process regression for three applications in power systems. First, a physics-informed approach is introduced to infer the grid dynamics using synchrophasor data with minimal network information. The suggested method is useful for a wide range of applications, including prediction, extrapolation, and anomaly detection. Further, the proposed framework accommodates heterogeneous noisy measurements with missing entries. Second, a learn-to-optimize scheme is presented using Gaussian process regression that predicts the optimal power flow minimizers given grid conditions.
The main contribution is leveraging sensitivities to expedite learning and achieve data efficiency without compromising computational efficiency. Third, Bayesian optimization is applied to solve a bi-level minimization used for strategic investment in electricity markets.
This method relies on modeling the cost of the outer problem as a Gaussian process and is applicable to non-convex and hard-to-evaluate objective functions. The designed algorithm shows significant improvement in speed while attaining a lower cost than existing methods. / Doctor of Philosophy / The proliferation of renewables, electric vehicles, and power electronic devices calls for innovative approaches to learn, optimize, and plan the power system. The uncertain and volatile nature of the integrated components necessitates using swift and probabilistic solutions.
This dissertation focuses on three practically important problems stemming from power system modernization. First, a novel approach is proposed that improves power system monitoring, which is the first and necessary step for the stable operation of the network.
The suggested method applies to a wide range of applications and can use heterogeneous and noisy measurements with missing entries. The second problem focuses on predicting the minimizers of an optimization task. Moreover, a computationally efficient framework is put forth to expedite this process. The third part of this dissertation identifies investment portfolios for electricity markets that yield maximum revenue and minimum cost.
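For readers unfamiliar with the key ingredient, the closed-form Gaussian process predictions mentioned above follow the standard textbook recipe. The sketch below is generic (squared-exponential kernel, 1-D inputs, made-up hyperparameters) and does not include the dissertation's physics-informed or sensitivity-augmented extensions:

```python
import numpy as np

def gp_predict(X, y, Xs, ell=1.0, sf=1.0, sn=0.1):
    """Standard GP regression: closed-form predictive mean and variance
    at test points Xs, given 1-D training inputs X and targets y."""
    def kern(A, B):
        return sf**2 * np.exp(-0.5 * (A[:, None] - B[None, :])**2 / ell**2)
    K = kern(X, X) + sn**2 * np.eye(len(X))     # noisy training covariance
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = kern(Xs, X)
    mean = Ks @ alpha                           # predictive mean
    V = np.linalg.solve(L, Ks.T)
    var = sf**2 + sn**2 - np.sum(V**2, axis=0)  # predictive variance
    return mean, var

X = np.linspace(0, 5, 25)
y = np.sin(X) + 0.1 * np.random.randn(25)
mu, var = gp_predict(X, y, np.linspace(0, 5, 200))
```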
355.
A race toward the origin between n random walks. Denby, Daniel Caleb (02 June 2010)
This dissertation studies systems of "competing" discrete random walks as discrete and continuous time processes. A system is thought of as containing n imaginary particles performing random walks on lines parallel to the x-axis in Cartesian space. The particles act completely independently of each other and have, in general, different starting coordinates.
In the discrete time situation, the motion of the n particles is governed by n independent streams of Bernoulli trials with success probabilities p₁, p₂, …, pₙ respectively. A success for any particle at a trial causes that particle to move one unit toward the origin, and a failure causes it to take a "zero-step" (i.e. remain stationary). A probabilistic description is first given of the positions of the particles at arbitrary points in time, and this is extended to provide time dependent and independent probabilities of which particle is the winner, that is to say, of which particle first reaches the origin.
In this case "draws" are possible and the relevant probabilities are derived. The results are expressed, in particular, in terms of Generalized Hypergeometric Functions. In addition, formulae are given for the duration of what may now be regarded as a race with winning post at the origin.
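Although the dissertation derives these win and draw probabilities in closed form (in terms of generalized hypergeometric functions), they are easy to estimate by simulation; the following sketch uses assumed starting positions and success probabilities:

```python
import random

def race_to_origin(start, p, n_trials=100_000):
    """Monte Carlo estimate of win and draw probabilities for the discrete-time
    race: particle i starts start[i] units from the origin and moves one unit
    toward it with probability p[i] at each trial (else takes a zero-step)."""
    n = len(start)
    wins, draws = [0] * n, 0
    for _ in range(n_trials):
        pos = list(start)
        finished = []
        while not finished:
            for i in range(n):
                if random.random() < p[i]:
                    pos[i] -= 1
            finished = [i for i in range(n) if pos[i] == 0]
        if len(finished) == 1:
            wins[finished[0]] += 1
        else:
            draws += 1                   # two or more reach the origin together
    return [w / n_trials for w in wins], draws / n_trials

print(race_to_origin(start=[5, 7], p=[0.5, 0.7]))
```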
In the continuous time situation, the motion of the n particles is governed by n independent Poisson streams, in general, having different parameters. A treatment similar to that for the discrete time situation is given with the exception of draw probabilities which in this case are not possible.
Approximations are obtained in many cases. Apart from their practical utility, these give insight into the operation of the systems in that they reveal how changes in one or more of the parameters may affect the win and draw probabilities and also the duration of the race.
A chapter is devoted to practical applications. Here it is shown how the theory of random walks racing toward the origin can be utilized as a basic framework for explaining the operation of, and answering pertinent questions concerning several apparently diverse situations. Examples are Lanchester Combat theory, inventory control, reliability and queueing theory. / Ph. D.
356.
Random Vector Generation on Large Discrete Spaces. Shin, Kaeyoung (17 December 2010)
This dissertation addresses three important open questions in the context of generating random vectors having discrete support. The first question relates to the "NORmal To Anything" (NORTA) procedure, which is easily the most widely used amongst methods for general random vector generation. While NORTA enjoys such popularity, there remain issues surrounding its efficient and correct implementation particularly when generating random vectors having denumerable support. These complications stem primarily from having to safely compute (on a digital computer) certain infinite summations that are inherent to the NORTA procedure. This dissertation addresses the summation issue within NORTA through the construction of easily computable truncation rules that can be applied for a range of discrete random vector generation contexts.
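For context, the core NORTA transformation is simple to state in code. The sketch below (with an illustrative correlation matrix and Poisson marginals) glosses over exactly the hard parts the dissertation addresses, namely calibrating the normal-space correlation and safely truncating the infinite sums that arise for denumerable marginals:

```python
import numpy as np
from scipy import stats

def norta(n, Sigma_z, inv_cdfs):
    """NORTA sketch: correlated standard normals -> uniforms -> inverse CDF
    of each marginal. Sigma_z is the correlation in *normal* space; matching
    a target correlation for the output marginals requires calibration."""
    C = np.linalg.cholesky(Sigma_z)
    Z = np.random.randn(n, len(Sigma_z)) @ C.T   # correlated N(0, 1) draws
    U = stats.norm.cdf(Z)                        # correlated uniforms
    return np.column_stack([f(u) for f, u in zip(inv_cdfs, U.T)])

Sigma_z = np.array([[1.0, 0.7],
                    [0.7, 1.0]])                 # illustrative choice
inv_cdfs = [lambda u: stats.poisson.ppf(u, mu=3),
            lambda u: stats.poisson.ppf(u, mu=5)]
X = norta(10_000, Sigma_z, inv_cdfs)             # two correlated Poisson columns
```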
The second question tackled in this dissertation relates to developing a customized algorithm for generating multivariate Poisson random vectors. The algorithm developed (TREx) is uniformly fast, about one hundred to one thousand times faster than NORTA, and presents opportunities for straightforward extensions to the case of negative binomial marginal distributions.
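The TREx algorithm itself is not reproduced here; one classical (and much more limited) way to get correlated Poisson marginals is the common-shock construction sketched below, which only produces nonnegative correlation:

```python
import numpy as np

def common_shock_poisson(n, lam1, lam2, lam0):
    """Bivariate Poisson via a shared shock: X1 = Y1 + Y0, X2 = Y2 + Y0,
    with the Y's independent Poisson. Marginals are Poisson(lam1 + lam0)
    and Poisson(lam2 + lam0); Cov(X1, X2) = lam0 >= 0."""
    Y0 = np.random.poisson(lam0, n)
    X1 = np.random.poisson(lam1, n) + Y0
    X2 = np.random.poisson(lam2, n) + Y0
    return X1, X2

X1, X2 = common_shock_poisson(10_000, lam1=2.0, lam2=4.0, lam0=1.0)
```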
The third and arguably most important question addressed in the dissertation is that of exact nonparametric random vector generation on finite spaces. Specifically, it is well-known that NORTA does not guarantee exact generation in dimensions higher than two. This represents an important gap in the random vector generation literature, especially in view of contexts that stipulate strict adherence to the dependency structure of the requested random vectors. This dissertation fully addresses this gap through the development of Maximum Entropy methods. The methods are exact, very efficient, and work on any finite discrete space with stipulated nonparametric marginal distributions. All code developed as part of the dissertation was written in MATLAB, and is publicly accessible through the Web site https://filebox.vt.edu/users/pasupath/pasupath.htm. / Ph. D.
357.
A Random Coefficient Analysis of the United States Gasoline Market From 1960-1995. Laffman, John D. (12 September 2002)
This study uses a random coefficient estimation procedure to analyze the U.S. gasoline market from 1960 to 1995 with three main objectives: (1) provide an empirical methodology that can estimate a gasoline demand function capable of performing well in prediction; (2) evaluate the elasticities of the models presented to determine which model is more accurate at capturing supply shocks that impacted gasoline demand; and (3) evaluate the behavior of the elasticities of the beta coefficients.
This research will show that the variation from historical economic patterns was a result of supply shocks. I argue that when the OLS model of the gasoline market developed by William H. Greene is used, supply shocks are not well captured because the coefficients are fixed. If the random coefficient model developed by P.A.V.B. Swamy is introduced, the coefficients vary over time, thereby enabling supply shocks to be included in the model; the result is more accurate forecasts as well as meaningful time patterns in the beta coefficients. / Master of Arts
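Swamy's estimator is a generalized-least-squares procedure and is not reproduced here. As a loose illustration of the broader idea that letting coefficients vary over time can absorb supply shocks, the sketch below filters a regression whose coefficients follow a random walk (the model choice, noise variances, and initialization are all assumptions):

```python
import numpy as np

def time_varying_ols(y, X, q=1e-4, r=1.0):
    """Kalman-filter sketch of y_t = x_t' b_t + e_t with random-walk
    coefficients b_t = b_{t-1} + w_t; returns the filtered path of b_t.
    q and r (state and observation noise variances) are assumed, not estimated."""
    T, k = X.shape
    b = np.zeros(k)                       # coefficient estimate
    P = 10.0 * np.eye(k)                  # diffuse initial uncertainty
    path = np.zeros((T, k))
    for t in range(T):
        P = P + q * np.eye(k)             # predict: coefficients may drift
        x = X[t]
        S = x @ P @ x + r                 # innovation variance
        K = P @ x / S                     # Kalman gain
        b = b + K * (y[t] - x @ b)        # update with the new observation
        P = P - np.outer(K, x) @ P
        path[t] = b
    return path
```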
358.
Jumping Connections: A Graph-Theoretic Model for Recommender Systems. Mirza, Batul J. (14 March 2001)
Recommender systems have become paramount for customizing information access and reducing information overload. They serve multiple uses, ranging from suggesting products and artifacts (to consumers), to bringing people together by the connections induced by (similar) reactions to products and services. This thesis presents a graph-theoretic model that casts recommendation as a process of 'jumping connections' in a graph. In addition to emphasizing the social network aspect, this viewpoint provides a novel evaluation criterion for recommender systems. Algorithms for recommender systems are distinguished not in terms of predicted ratings of services/artifacts, but in terms of the combinations of people and artifacts that they bring together. We present an algorithmic framework drawn from random graph theory and outline an analysis for one particular form of jump called a 'hammock.' Experimental results on two datasets collected over the Internet demonstrate the validity of this approach. / Master of Science
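As a rough illustration of the jump primitive analyzed in the thesis, the sketch below connects two people when they share at least w rated artifacts, a simplified reading of a width-w hammock (names and data are made up):

```python
from itertools import combinations

def hammock_pairs(ratings, w):
    """Connect two people when they have rated at least w artifacts in common."""
    return [(a, b) for a, b in combinations(ratings, 2)
            if len(ratings[a] & ratings[b]) >= w]

ratings = {"ann": {1, 2, 3}, "bob": {2, 3, 4}, "cal": {4, 9}}
print(hammock_pairs(ratings, w=2))   # [('ann', 'bob')]
```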
359.
Generating Random Graphs with Tunable Clustering Coefficient. Parikh, Nidhi Kiranbhai (29 April 2011)
Most real-world networks exhibit a high clustering coefficient: the probability that two neighbors of a node are also neighbors of each other. We propose four algorithms, CONF-1, CONF-2, THROW-1, and THROW-2, which are based on the configuration model and take a triangle degree sequence (representing the number of triangles/corners at a node) and a single-edge degree sequence (representing the number of single-edges/stubs at a node) as input and generate a random graph with a tunable clustering coefficient. We analyze them theoretically and empirically for the case of a regular graph. CONF-1 and CONF-2 generate a random graph with the degree sequence and the clustering coefficient anticipated from the input triangle and single-edge degree sequences. At each time step, CONF-1 chooses each node for creating triangles or single edges with the same probability, while CONF-2 chooses a node for creating triangles or single edges with a probability proportional to its number of unconnected corners or unconnected stubs, respectively. Experimental results match quite well with the anticipated clustering coefficient except for highly dense graphs, in which case the experimental clustering coefficient is higher than the anticipated value. THROW-2 chooses three distinct nodes for creating triangles and two distinct nodes for creating single edges, while they need not be distinct for THROW-1. For THROW-1 and THROW-2, the degree sequence and the clustering coefficient of the generated graph vary from the input. However, the expected degree distribution and the clustering coefficient of the generated graph can also be predicted using analytical results. Experiments show that, for THROW-1 and THROW-2, the results match quite well with the analytical results. Typically, only information about the degree sequence or degree distribution is available. We therefore also propose an algorithm, DEG, that takes a degree sequence and a clustering coefficient as input and generates a graph with the same properties. Experiments show results for DEG that are quite similar to those for CONF-1 and CONF-2. / Master of Science
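For orientation, a bare-bones version of the underlying idea, matching triangle corners in triples and stubs in pairs in the spirit of the configuration model, might look like the sketch below (a simplification for illustration, not CONF-1/CONF-2 themselves):

```python
import random

def clustered_config_graph(tri_deg, single_deg):
    """Sketch: tri_deg[v] triangle corners and single_deg[v] stubs per node;
    corners are matched in random triples (forming triangles) and stubs in
    random pairs (forming single edges). Multi-edges collapse; self-loops drop."""
    corners = [v for v, t in enumerate(tri_deg) for _ in range(t)]
    stubs = [v for v, s in enumerate(single_deg) for _ in range(s)]
    assert len(corners) % 3 == 0 and len(stubs) % 2 == 0
    random.shuffle(corners)
    random.shuffle(stubs)
    edges = set()
    for i in range(0, len(corners), 3):
        a, b, c = corners[i:i + 3]
        edges |= {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))}
    for i in range(0, len(stubs), 2):
        edges.add(frozenset(stubs[i:i + 2]))
    return [tuple(e) for e in edges if len(e) == 2]   # drop self-loops

g = clustered_config_graph(tri_deg=[2, 2, 2, 2, 2, 2], single_deg=[1, 1, 1, 1, 1, 1])
```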
360.
A Reconfigurable Random Access MAC Implementation for Software Defined Radio Platforms. Anyanwu, Uchenna Kevin (03 August 2012)
Wireless communications technology ranging from satellite communications to sensor networks has benefited from the development of flexible software defined radio (SDR) platforms. SDR is used in military radio devices to reconfigure waveforms, frequency, and modulation schemes in both software and hardware to improve communication performance in harsh environments. In the commercial sector, SDRs are present in cellular infrastructure, where base stations can reconfigure operating parameters to meet specific cellular coverage goals. In response to these enhancements, industry leaders in cellular (such as Lucent, Nortel, and Motorola) have embraced the cost advantages of implementing SDRs in their cellular technology. In the future, there will be a need for more capable SDR platforms on inexpensive hardware that are able to balance workloads between several computational processing elements while minimizing power cost to accomplish multiple goals.
This thesis will present the development of a random access medium access control (MAC) protocol for the IRIS platform. An assessment of different SDR hardware and software platforms is conducted. From this assessment, we present several SDR technology requirements for networking research and discuss the impact of these requirements on future SDR platforms. As a consequence of these requirements, we choose the USRP family of SDR hardware and the IRIS software platform to develop our two random access MAC implementations: Aloha with Explicit ACK and Aloha with Implicit ACK. A point-to-point link was tested with our protocol, and this link was then extended to a 3-hop (4-node) network. To improve our protocols' efficiency, we implemented carrier sensing on the FPGA of the USRP E100, an embedded SDR hardware platform. We also present simulations using OMNeT++ software to accompany our experimental data and, moreover, show how our protocol scales as more nodes are added to the network. / Master of Science
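As background for the protocol work, the classical Aloha throughput behavior is easy to reproduce in simulation. The sketch below is a generic textbook slotted-Aloha model; it stands in for, and is much simpler than, the thesis's Aloha with Explicit/Implicit ACK implementations on IRIS:

```python
import random

def slotted_aloha_throughput(n_nodes, p_tx, n_slots=100_000):
    """A slot carries a packet successfully iff exactly one node transmits."""
    successes = sum(
        1 for _ in range(n_slots)
        if sum(random.random() < p_tx for _ in range(n_nodes)) == 1
    )
    return successes / n_slots

# throughput peaks near 1/e ~ 0.368 when n_nodes * p_tx = 1
print(slotted_aloha_throughput(n_nodes=10, p_tx=0.1))
```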