661

Tests of the Efficient Markets Hypothesis

Reschenhofer, Erhard, Hauser, Michael A. January 1997 (has links) (PDF)
This paper surveys various statistical methods that have been proposed for the examination of the efficiency of financial markets and proposes a novel procedure for testing the predictability of a time series. For illustration, this procedure is applied to Austrian stock return series.
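The abstract does not spell out the proposed procedure, so the sketch below is only a hedged illustration of one classic predictability check from this literature: a Lo-MacKinlay-style variance ratio, which should be close to 1 for an unpredictable (weak-form efficient) return series. The data, lag choice and function names are assumptions for the example, not the authors' method.

```python
import numpy as np

def variance_ratio(returns, q):
    """Variance of overlapping q-period returns divided by q times the 1-period variance."""
    r = np.asarray(returns, dtype=float) - np.mean(returns)
    rq = np.convolve(r, np.ones(q), mode="valid")   # overlapping q-period sums
    return rq.var(ddof=1) / (q * r.var(ddof=1))

rng = np.random.default_rng(0)
iid = rng.normal(0.0, 0.01, 2000)                    # random-walk benchmark: VR close to 1
ar = np.zeros(2000)
for t in range(1, 2000):                             # mildly predictable AR(1) returns: VR above 1
    ar[t] = 0.3 * ar[t - 1] + rng.normal(0.0, 0.01)
print(variance_ratio(iid, q=5), variance_ratio(ar, q=5))
```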
662

Topics In Probabilistic Combinatorics

Johnson, Darin Bryant 01 January 2009 (has links)
This paper is a compilation of results in combinatorics utilizing the probabilistic method. Below is a brief description of the results highlighted in each chapter. Chapter 1 provides basic definitions, lemmas, and theorems from graph theory, asymptotic analysis, and probability which will be used throughout the paper. Chapter 2 introduces the independent domination number. It is then shown that in the random graph model G(n,p), with probability tending to one, the independent domination number is one of two values. Also, the number of independent dominating sets of a given cardinality is analyzed statistically. Chapter 3 introduces the tree domination number. It is then shown that in the random graph model G(n,p), with probability tending to one, the tree domination number is one of two values. Additional related domination parameters are also discussed. Chapter 4 introduces a generalized rook polynomial first studied by J. Goldman et al. Central and local limit theorems are then proven for certain classes of the generalized rook polynomial. Special cases include known central and local limit theorems for the Stirling numbers of the first and second kind, and additionally new limit theorems for the Lah numbers and certain classes of known generalized Stirling numbers. Chapter 5 introduces the Kneser graph. The exact expected value and variance of the distance between [n] and a vertex chosen uniformly at random are given. An asymptotic formula for the expectation is found.
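As a hedged numerical companion to the two-point concentration result (not the proof technique used in the thesis), the sketch below bounds the independent domination number of a sampled G(n,p) from above: every maximal independent set is an independent dominating set, so the smallest one found over many random trials is a valid upper bound. Parameter values are illustrative assumptions.

```python
import networkx as nx

def independent_domination_upper_bound(n, p, trials=200, seed=0):
    G = nx.gnp_random_graph(n, p, seed=seed)
    best = n
    for t in range(trials):
        # any maximal independent set is an independent dominating set of G
        mis = nx.maximal_independent_set(G, seed=seed + t)
        best = min(best, len(mis))
    return best

print(independent_domination_upper_bound(n=200, p=0.1))
```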
663

The non-equilibrium statistical physics of stochastic search, foraging and clustering

Bhat, Uttam 02 February 2018 (has links)
This dissertation explores two themes central to the field of non-equilibrium statistical physics. The first centers on the use of random walks, first-passage processes, and Brownian motion to model basic stochastic search processes found in biology and ecological systems. The second centers on clustered networks: how clustering modifies the nature of the transitions in the appearance of various graph motifs, and the use of such networks in modeling social networks. In the first part of this dissertation, we start by investigating properties of intermediate crossings of Brownian paths. We develop simple analytical tools to obtain probability distributions of intermediate crossing positions and intermediate crossing times of Brownian paths. We find that the distribution of intermediate crossing times can be unimodal or bimodal. Next, we develop analytical and numerical methods to solve a system of N diffusive searchers which are reset to the origin at stochastic or periodic intervals. We obtain the optimal criteria to search for a fixed target in one, two and three dimensions. For these two systems, we also develop efficient ways to simulate Brownian paths, where the simulation kernel makes maximal use of first-passage ideas. Finally, we develop a model to understand foraging in a resource-rich environment. Specifically, we investigate the effect of greed on the lifetime of a diffusive forager. This lifetime shows a non-monotonic dependence on greed in one and two dimensions and, surprisingly, a peak at negative greed in one dimension. In the second part of this dissertation, we develop simple models to capture the non-tree-like (clustering) aspects of random networks that arise in the real world. By 'clustered networks', we specifically mean networks where the probability of links between neighbors of a node (i.e., 'friends of friends') is positive. We discuss three simple and related models. We find a series of transitions in the density of graph motifs such as triangles (3-cliques), 4-cliques, etc., as a function of the clustering probability. We also find that giant 3-cores emerge through first- or second-order, or even mixed, transitions in clustered networks.
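As a minimal sketch of the resetting-search ingredient described above (illustrative parameters only, not the dissertation's code), the snippet below estimates the mean first-passage time of a one-dimensional diffusive searcher that resets to the origin at rate r; sweeping r shows the characteristic non-monotonic dependence with an optimal reset rate.

```python
import numpy as np

def mfpt_with_resetting(r, L=1.0, D=0.5, dt=2e-3, n_walkers=400, seed=1):
    rng = np.random.default_rng(seed)
    times = np.empty(n_walkers)
    for i in range(n_walkers):
        x, t = 0.0, 0.0
        while x < L:                             # search for a target placed at x = L
            if rng.random() < r * dt:            # Poissonian reset to the origin
                x = 0.0
            x += np.sqrt(2.0 * D * dt) * rng.normal()
            t += dt
        times[i] = t
    return times.mean()

for r in (0.5, 2.0, 8.0):                        # too little and too much resetting both slow the search
    print(r, mfpt_with_resetting(r))
```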
664

Improved detection and quantisation of keypoints in the complex wavelet domain

Gale, Timothy Edward January 2018 (has links)
An algorithm which is able to consistently identify features in an image is a basic building block of many object recognition systems. Attaining sufficient consistency is challenging, because factors such as pose and lighting can dramatically change a feature's appearance. Effective feature identification therefore requires both a reliable and accurate keypoint detector and a discriminative categoriser (or quantiser). The Dual Tree Complex Wavelet Transform (DTCWT) decomposes an image into oriented subbands at a range of scales. The resulting domain is arguably well suited for further image analysis tasks such as feature identification. This thesis develops feature identification in the complex wavelet domain, building on previous keypoint detection work and exploring the use of random forests for descriptor quantisation. Firstly, we extended earlier work on keypoint detection energy functions. Existing complex wavelet based detectors were observed to suffer from two defects: a tendency to produce keypoints on straight edges at particular orientations and sensitivity to small translations of the image. We introduced a new corner energy function based on the Same Level Product (SLP) transform. This function performed well compared to previous ones, combining competitive edge rejection and positional stability properties. Secondly, we investigated the effect of changing the resolution at which the energy function is sampled. We used the undecimated DTCWT to calculate energy maps at the same resolution as the original images. This revealed the presence of fine details which could not be accurately interpolated from an energy map at the standard resolution. As a result, doubling the resolution of the map along each axis significantly improved both the reliability and positional accuracy of detections. However, calculating the map using interpolated coefficients resulted in artefacts introduced by inaccuracies in the interpolation. We therefore proposed a modification to the standard DTCWT structure which doubles its output resolution for a modest computational cost. Thirdly, we developed a random forest based quantiser which operates on complex wavelet polar matching descriptors, with optional rotational invariance. Trees were evaluated on the basis of how consistently they quantised features into the same bins, and several examples of each feature were obtained by means of tracking. We found that the trees produced the most consistent quantisations when they were trained with a second set of tracked keypoints. Detecting keypoints using the higher-resolution energy maps also resulted in more consistent quantiser outputs, indicating the importance of the choice of detector on quantiser performance. Finally, we introduced a fast implementation of the DTCWT, keypoint detection and descriptor extraction algorithms for OpenCL-capable GPUs. Several aspects were optimised to enable it to run more efficiently on modern hardware, allowing it to process HD footage faster than real time. This particularly aided the development of the detector algorithms by permitting interactive exploration of their failure modes using a live camera feed.
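To make the corner-energy idea concrete (a hedged sketch only; this is neither the thesis's SLP energy function nor any particular DTCWT library's API), the snippet below assumes a (6, H, W) array of complex oriented subband coefficients from one DTCWT scale and scores locations that respond across several orientations, which suppresses straight edges that excite mainly one orientation.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def corner_energy(subbands, eps=1e-8):
    """subbands: complex array of shape (6, H, W) from one DTCWT level (assumed input)."""
    mags = np.abs(subbands)
    # geometric-mean combination: large only when several orientations respond at once
    energy = np.exp(np.log(mags + eps).mean(axis=0))
    return energy / (mags.sum(axis=0) + eps)      # rough normalisation for local contrast

def detect_keypoints(energy, threshold):
    # keep 3x3 local maxima of the energy map that exceed the threshold
    peaks = (energy == maximum_filter(energy, size=3)) & (energy > threshold)
    return np.argwhere(peaks)                     # (row, col) keypoint locations

rng = np.random.default_rng(0)
demo = rng.normal(size=(6, 64, 64)) + 1j * rng.normal(size=(6, 64, 64))   # stand-in subbands
print(detect_keypoints(corner_energy(demo), threshold=0.15).shape)
```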
665

Nonequilibrium emergent interactions between run-and-tumble random walkers

Slowman, Alexander Barrett January 2018 (has links)
Nonequilibrium statistical physics involves the study of many-particle systems that break time reversibility, also known as detailed balance, at some scale. For states in thermal equilibrium, which must respect detailed balance, the comprehensive theory of statistical mechanics was developed to explain how their macroscopic properties arise from interactions between their microscopic constituent particles; for nonequilibrium states no such theory exists. The study of active matter, made up of particles that individually transduce free energy to produce systematic movement, provides a paradigm in which to develop an understanding of nonequilibrium behaviours. In this thesis, we are interested in particular in the microscopic interactions that generate the clustering of active particles that has been widely observed in simulations, and may have biological relevance to the formation of bacterial assemblages known as biofilms, which are an important source of human infection. The focus of this thesis is a microscopic lattice-based model of two random walkers interacting under mutual exclusion and undergoing the run-and-tumble dynamics that characterise the motion of certain species of bacteria, notably Escherichia coli. I apply perturbative and exact analytic approaches from statistical physics to three variants of the model in order to find the probability distributions of their nonequilibrium steady states and elucidate the emergent interactions that manifest. I first apply a generating function approach to the model on a one-dimensional periodic lattice where the particles perform straight-line runs randomly interspersed by instantaneous velocity reversals or tumbles, and find an exact solution to the stationary probability distribution. The distribution can be interpreted as an effective non-equilibrium pair potential that leads to a finite-range attraction in addition to jamming between the random walkers. The finite-range attraction collapses to a delta function in the limit of continuous space and time, but the combination of this jamming and attraction is sufficiently strong that even in this continuum limit the particles spend a finite fraction of time next to each other. Thus, although the particles only interact directly through repulsive hard-core exclusion, the activity of the particles causes the emergence of attractive interactions, which do not arise between passive particles with repulsive interactions and dynamics respecting detailed balance. I then relax the unphysical assumption of instantaneous tumbling and extend the interacting run-and-tumble model to incorporate a finite tumbling duration, where a tumbling particle remains stationary on its site. Here the exact solution for the nonequilibrium stationary state is derived using a generalisation of the previous generating function approach. This steady state is characterised by two lengthscales, one arising from the jamming of approaching particles, familiar from the instant tumbling model, and the other from one particle moving when the other is tumbling. The first of these lengthscales vanishes in a scaling limit where continuum dynamics is recovered. However, the second, entirely new, lengthscale remains finite. These results show that the feature of a finite tumbling duration is relevant to the physics of run-and-tumble interactions. Finally, I explore the effect of walls on the interacting run-and-tumble model by applying a perturbative graph-theoretic approach to the model with reflecting boundaries. Confining the particles in this way leads to a probability distribution in the low-tumble limit with a much richer structure than the corresponding limit for the model on a periodic lattice. This limiting probability distribution indicates that an interaction over a finite distance emerges not just between the particles, but also between the particles and the reflecting boundaries. Together, these works provide a potential pathway towards understanding the clustering of self-propelled particles widely observed in active matter from a microscopic perspective.
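A minimal simulation sketch of the first model variant (random-sequential updates and illustrative parameters, not the thesis's exact dynamics or its analytic solution): two run-and-tumble particles with hard-core exclusion on a periodic one-dimensional lattice, where each selected particle tumbles with probability alpha and otherwise attempts a hop in its run direction. The estimated fraction of time spent on adjacent sites is the signature of the emergent attraction discussed above.

```python
import numpy as np

def adjacent_fraction(L=50, alpha=0.05, steps=200_000, seed=2):
    rng = np.random.default_rng(seed)
    pos = np.array([0, L // 2])                  # particle positions on the ring
    vel = np.array([1, -1])                      # current run directions
    adjacent = 0
    for _ in range(steps):
        i = rng.integers(2)                      # pick one particle at random
        if rng.random() < alpha:
            vel[i] = -vel[i]                     # tumble: reverse run direction
        else:
            target = (pos[i] + vel[i]) % L
            if target != pos[1 - i]:             # hard-core exclusion blocks the hop
                pos[i] = target
        gap = (pos[0] - pos[1]) % L
        adjacent += gap in (1, L - 1)
    return adjacent / steps

print(adjacent_fraction())                       # well above the 2/(L-1) value for uncorrelated positions
```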
666

Genetic and non-genetic evaluation tools for accelerating improvement in beef cattle carcass traits within and across country

Englishby, Tanya Marie January 2018 (has links)
The main revenue source for beef cattle farmers is the price awarded for carcasses, which is based on carcass value (i.e., carcass weight, conformation and fat score) and is influenced by both genetic and environmental factors (e.g., herd management). In order to improve profitability, accurate means of evaluating and improving both sets of factors influencing carcass trait performance are necessary. This would entail optimal management of genetic resources and herd practices. Furthermore, access to a large international germplasm pool would facilitate faster genetic gain. The objective of this thesis was to generate tools for the enhancement of carcass trait genetic and herd management evaluations at both a national and an international level. The data used in the thesis originated from the Irish and UK national cattle databases and consisted of 336,944 Irish and 147,876 UK cattle of multiple beef and dairy breeds from 9,572 Irish and 3,385 UK commercial herds. Livestock mature at different rates depending on a number of factors, including their genetic background; therefore, the optimum age at which to slaughter the progeny of different sires may differ. Chapter 2 examined sire-level genetic profiles for three carcass traits (carcass weight, conformation and fat score) in cattle using data from the Republic of Ireland. Variance components for each trait across age at slaughter were estimated using sire random regression models. Heritability estimates of carcass traits across ages at slaughter varied depending on gender (heifers, steers, young bulls) and the trait in question, and ranged from 0.08 (± 0.02) to 0.34 (± 0.02) for carcass weight, from 0.24 (± 0.02) to 0.42 (± 0.02) for conformation score and from 0.16 (± 0.03) to 0.40 (± 0.02) for fat score. Genetic correlations between traits across ages at slaughter were all significantly less than unity, indicating that different genetic mechanisms control these traits across life. The results from Chapter 2 show that genetic variability in the progeny growth trajectory of sires exists and that this variability in the growth profiles of sires for carcass traits may be exploited in breeding programmes. As carcass traits are a function of both the genetics of the animal and the environment in which the animal is reared, Chapter 3 aimed to quantify the contribution of the herd environment to the same three beef carcass traits, with particular emphasis on generating finishing-herd-specific profiles for carcass traits across different ages at slaughter. The data analysed in Chapter 3 were from animals slaughtered in UK abattoirs. Genetic and finishing-herd-year of slaughter parameters were generated using random regression analysis. Across slaughter age and gender, the proportion of phenotypic variance accounted for by finishing-herd-year of slaughter variance was between 30.83% and 71.48% for carcass weight, between 21.38% and 26.29% for conformation score and between 10.88% and 44.04% for fat score. These parameters indicate that the finishing herd environment is at least as important a contributor to carcass trait variability as the genetic background of the animals, and one that is amenable to improvement with appropriate management practices. The final study of the thesis investigated the feasibility of across-country carcass trait genetic evaluations. Examination of the level of genetic connectedness between Ireland and the UK found 225 distinct bulls common to both countries. These common bulls were related to 80,707 Irish and 23,162 UK animals with carcass records in each population. Genetic correlations for carcass traits between Ireland and the UK were close to unity, ranging from 0.92 (± 0.31) for fat score to 0.96 (± 0.17) for carcass weight, indicating that the carcass traits recorded in both countries are genetically essentially equivalent. These strong genetic correlations between carcass traits in both countries enabled the direct pooling of carcass data for the purpose of across-country genetic evaluations (breeding value estimation). An increased rate of genetic gain for carcass traits per generation was predicted from across-country selection compared with within-country selection, ranging from 2% (conformation score in Ireland) to 33.77% (conformation score in the UK). This improved gain was primarily due to the greater intensity of selection and somewhat more accurate estimated breeding values when carcass records and pedigree information from both countries were combined. The results presented in this thesis demonstrate that routinely collected abattoir data in Ireland and the UK can be exploited to produce additional selection and on-farm management tools. The results also show that access to across-country carcass trait genetic evaluations would allow UK and Irish beef farmers to make more informed decisions on the selection of the seed stock needed to increase genetic gain and profits. Outcomes of this thesis pave the way to improvements in national carcass trait genetic evaluations in Ireland and the UK based on appropriate age at slaughter, and also demonstrate the feasibility of across-country carcass trait genetic evaluations between Ireland and the UK. The scope for further research includes the identification of specific management practices for optimal herd performance for carcass traits. Additionally, across-country carcass trait genetic evaluations based on random regression models across different ages at slaughter would be of benefit to beef producers in Ireland and the UK. Finally, the viability of across-country genetic evaluations for additional carcass traits, such as carcass cut weights, should be explored.
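For readers unfamiliar with the quantities quoted above, the sketch below applies two standard quantitative-genetics formulas (not the thesis's random regression models) to made-up variance components: the sire-model heritability h² = 4σ²_sire / (σ²_sire + σ²_residual), and the share of phenotypic variance attributable to finishing-herd-year of slaughter.

```python
def sire_model_heritability(var_sire, var_residual):
    # a sire passes on half of his additive genes, so additive variance = 4 * sire variance
    return 4.0 * var_sire / (var_sire + var_residual)

def herd_year_share(var_herd_year, var_other):
    # proportion of phenotypic variance explained by finishing-herd-year of slaughter
    return var_herd_year / (var_herd_year + var_other)

# placeholder variance components, not estimates from the thesis
print(sire_model_heritability(var_sire=180.0, var_residual=2200.0))   # about 0.30
print(herd_year_share(var_herd_year=900.0, var_other=1500.0))         # about 0.38
```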
667

Dimension theory of random self-similar and self-affine constructions

Troscheit, Sascha January 2017 (has links)
This thesis is structured as follows. Chapter 1 introduces fractal sets before recalling basic mathematical concepts from dynamical systems, measure theory, dimension theory and probability theory. In Chapter 2 we give an overview of both deterministic and stochastic sets obtained from iterated function systems. We summarise classical results and set most of the basic notation. This is followed by the introduction of random graph directed systems in Chapter 3, based on the single-authored paper [T1] to be published in the Journal of Fractal Geometry. We prove that these attractors have equal Hausdorff and upper box-counting dimension irrespective of overlaps. It follows that the same holds for the classical models introduced in Chapter 2. This chapter also contains results about the Assouad dimensions for these random sets. Chapter 4 is based on the single-authored paper [T2] and establishes the box-counting dimension for random box-like self-affine sets using some of the results and the notation developed in Chapter 3. We give some examples to illustrate the results. In Chapter 5 we consider the Hausdorff and packing measure of random attractors and show that for reasonable random systems the Hausdorff measure is zero almost surely. We further establish bounds on the gauge functions necessary to obtain positive or finite Hausdorff measure for random homogeneous systems. Chapter 6 is based on a joint article with J. M. Fraser and J.-J. Miao [FMT] to appear in Ergodic Theory and Dynamical Systems. It is chronologically the first and contains results that were extended in the paper on which Chapter 3 is based. However, we give some simpler, alternative proofs in this chapter and, crucially, also find the Assouad dimension of some random self-affine carpets, showing that the Assouad dimension is always 'maximal' in both the measure-theoretic and topological senses.
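As a hedged numerical aside (not taken from the thesis), for random recursive self-similar constructions satisfying an open set condition the almost-sure Hausdorff and box-counting dimension is known to solve the expectation equation E[Σ r_i^s] = 1 (Falconer; Mauldin and Williams). The sketch below solves this equation for an illustrative construction in which each step uses either two maps of ratio 1/3 or three maps of ratio 1/4, each with probability 1/2; the ratios are assumptions for the example.

```python
from scipy.optimize import brentq

# (list of contraction ratios, probability of that outcome) at each construction step
outcomes = [([1/3, 1/3], 0.5), ([1/4, 1/4, 1/4], 0.5)]

def expected_moran(s):
    return sum(p * sum(r**s for r in ratios) for ratios, p in outcomes) - 1.0

dimension = brentq(expected_moran, 1e-9, 1.0)     # root of E[sum r_i^s] = 1
print(dimension)   # lies between log2/log3 and log3/log4, the two deterministic dimensions
```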
668

rstream: Streams of Random Numbers for Stochastic Simulation

L'Ecuyer, Pierre, Leydold, Josef January 2005 (has links) (PDF)
The package rstream provides a unified interface to streams of random numbers for the R statistical computing language. Its features include: independent streams of random numbers; substreams; easy handling of streams (initialize, reset); and antithetic random variates. The paper describes this package and demonstrates the usefulness of this approach with a simple example. / Series: Preprint Series / Department of Applied Statistics and Data Processing
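The snippet below is a conceptual sketch only, in Python rather than R: it is not the rstream API, but it illustrates the same idea of reproducible, statistically independent streams and substreams using NumPy's SeedSequence spawning.

```python
import numpy as np

root = np.random.SeedSequence(20050101)            # one master seed for the whole simulation
stream_seeds = root.spawn(3)                        # three independent streams
streams = [np.random.Generator(np.random.PCG64(s)) for s in stream_seeds]

print(streams[0].normal(size=3))                    # stream 0, e.g. one model component
print(streams[1].normal(size=3))                    # stream 1, e.g. another component

substream_seeds = stream_seeds[0].spawn(2)          # substreams of stream 0
sub0 = np.random.Generator(np.random.PCG64(substream_seeds[0]))
print(sub0.uniform(size=3))
```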
669

Parallelization of the algorithm for generating continuous random networks using Simulated Annealing

Romano, Gustavo January 2008 (has links)
This work has two main goals. The first is to present the state of the art in combinatorial optimization, with special emphasis on the Simulated Annealing (SA) method: its history, main features, generic algorithm, and the parallelization strategies proposed in the literature. It also presents the algorithm for generating continuous random networks, designed by researchers at the UFRGS Physics Institute, which uses SA to generate networks satisfying certain constraints. The second goal is to propose a parallelization of this algorithm in order to significantly reduce the generation time of each network, which with the sequential algorithm can exceed one month. This parallelization adapts one of the methods proposed in the literature and combines it with a domain decomposition technique. The results were satisfactory both in numerical quality and in the reduction of processing time. The work also discusses how the proposed parallelization generalizes to other SA-based problems.
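For context, the snippet below is a generic, hedged sketch of the Simulated Annealing loop described above; the cost function, proposal and cooling schedule are placeholders, and it is not the UFRGS continuous-random-network generation code or its parallelization.

```python
import math
import random

def simulated_annealing(cost, perturb, x0, t0=1.0, cooling=0.995, steps=20_000, seed=0):
    rng = random.Random(seed)
    x, fx, t = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        y = perturb(x, rng)
        fy = cost(y)
        # Metropolis rule: always accept improvements, sometimes accept worse moves
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                               # geometric cooling schedule
    return best, fbest

# toy usage: a one-dimensional cost with many local minima
cost = lambda x: (x - 2.0) ** 2 + math.sin(25.0 * x)
perturb = lambda x, rng: x + rng.gauss(0.0, 0.2)
print(simulated_annealing(cost, perturb, x0=10.0))
```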
670

Essays in nonparametric econometrics and infinite dimensional mathematical statistics

Horta, Eduardo de Oliveira January 2015 (has links)
The present Thesis is composed of four research papers in two distinct areas. In Horta, Guerre, and Fernandes (2015), which constitutes Chapter 2 of this Thesis, we propose a smoothed estimator in the framework of the linear quantile regression model of Koenker and Bassett (1978). A uniform Bahadur-Kiefer representation is provided, with an asymptotic rate which dominates the standard quantile regression estimator. Next, we prove that the bias introduced by smoothing is negligible in the sense that the bias term is first-order equivalent to the true parameter. A precise rate of convergence, which is controlled uniformly by choice of bandwidth, is provided. We then study second-order properties of the smoothed estimator, in terms of its asymptotic mean squared error, and show that it improves on the usual estimator when an optimal bandwidth is used. As corollaries to the above, one obtains that the proposed estimator is √n-consistent and asymptotically normal. Next, we provide a consistent estimator of the asymptotic covariance matrix which does not depend on ancillary estimation of nuisance parameters, and from which asymptotic confidence intervals are straightforwardly computable. The quality of the method is then illustrated through a simulation study. The research papers Horta and Ziegelmann (2015a, 2015b, 2015c) are all related in the sense that they stem from an initial impetus of generalizing the results in Bathia et al. (2010). In Horta and Ziegelmann (2015a), Chapter 3 of this Thesis, we address the question of existence of certain stochastic processes, which we call conjugate processes, driven by a second, measure-valued stochastic process. We investigate primitive conditions ensuring existence and, through the concepts of coherence and compatibility, obtain an affirmative answer to the former question. Relying on the notions of random measure (Kallenberg, 1973) and disintegration (Chang and Pollard, 1997; Pollard, 2002), we provide a general approach for the construction of conjugate processes. The theory allows for a rich set of examples, and includes a class of regime switching models. In Horta and Ziegelmann (2015b), Chapter 4 of the present Thesis, we introduce, in relation with the construction in Horta and Ziegelmann (2015a), the concept of a weakly conjugate process: a continuous-time, real-valued stochastic process driven by a sequence of random distribution functions, the connection between the two being given by a compatibility condition which says that distributional aspects of the former process are divisible into countably many cycles during which it has precisely the latter as marginal distributions. We then show that the methodology of Bathia et al. (2010) can be applied to study the dependence structure of weakly conjugate processes, and therewith provide √n-consistency results for the natural estimators appearing in the theory. Additionally, we illustrate the methodology through an implementation to financial data. Specifically, our method permits us to translate the dynamic character of the distribution of an asset returns process into the dynamics of a latent scalar process, which in turn allows us to generate forecasts of quantities associated with distributional aspects of the returns process. In Horta and Ziegelmann (2015c), Chapter 5 of this Thesis, we obtain √n-consistency results regarding estimation of the spectral representation of the zero-lag autocovariance operator of stationary Hilbertian time series, in a setting with imperfect measurements. This is a generalization of the method developed in Bathia et al. (2010). The generalization relies on the important property that centered random elements of strong second order in a separable Hilbert space lie almost surely in the closed range of the associated covariance operator (equivalently, are orthogonal to its kernel). We provide a straightforward proof of this fact.
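As a hedged illustration of the kind of smoothing discussed in Chapter 2 (not the estimator of Horta, Guerre and Fernandes, whose construction and rates the abstract only summarises), the sketch below fits a linear quantile regression by minimising a smooth surrogate of the Koenker-Bassett check loss, with the indicator at the kink replaced by a logistic function of bandwidth h; all data and parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def smoothed_check_loss(beta, X, y, tau, h):
    u = y - X @ beta
    indicator = 1.0 / (1.0 + np.exp(u / h))        # smooth surrogate for 1{u < 0}
    return np.mean(u * (tau - indicator))

rng = np.random.default_rng(3)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=4, size=n)   # symmetric errors: median coefficients are (1, 2)

fit = minimize(smoothed_check_loss, x0=np.zeros(2), args=(X, y, 0.5, 0.1), method="BFGS")
print(fit.x)   # close to (1, 2)
```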
