531 | Vision-Based Localization Using Reliable Fiducial Markers. Stathakis, Alexandros (05 January 2012)
Vision-based positioning systems are founded primarily on a simple image processing technique: identifying visually significant key-points in an image and relating them to a known coordinate system in the scene. Fiducial markers are used to provide the scene with a number of specific key-points, or features, that computer vision algorithms can quickly identify within a captured image. This thesis proposes a reliable vision-based positioning system which utilizes a unique pseudo-random fiducial marker. The marker itself offers 49 distinct feature points to be used in position estimation. Detection of the designed marker occurs after an integrated process of adaptive thresholding, k-means clustering, color classification, and data verification. The ultimate goal of such a system is indoor localization on low-cost autonomous mobile platforms.
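A rough Python/OpenCV sketch of the detection stages named in the abstract (adaptive thresholding, k-means colour clustering, colour classification) is given below; the 7x7 grid implied by the 49 feature points, the cell size, and the way the marker region is located are illustrative assumptions rather than the thesis's actual design, and the data-verification step is omitted.

```python
# Sketch of the stages named above, assuming OpenCV 4.x and NumPy.
# The marker geometry and region-finding step are hypothetical placeholders.
import cv2
import numpy as np

def detect_marker_cells(bgr_image, grid=7):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Adaptive thresholding: robust to uneven indoor lighting.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 31, 5)

    # Placeholder localisation: take the largest dark contour's bounding box
    # as the marker region and rescale it to a grid-aligned patch.
    contours, _ = cv2.findContours(255 - binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    patch = cv2.resize(bgr_image[y:y + h, x:x + w], (grid * 16, grid * 16))

    # k-means clustering of pixel colours into a small palette.
    pixels = patch.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(pixels, 4, None, criteria, 5, cv2.KMEANS_PP_CENTERS)

    # Colour classification: assign each grid cell the majority cluster label.
    labels = labels.reshape(patch.shape[:2])
    cells = np.zeros((grid, grid), dtype=int)
    for i in range(grid):
        for j in range(grid):
            block = labels[i * 16:(i + 1) * 16, j * 16:(j + 1) * 16]
            cells[i, j] = np.bincount(block.ravel()).argmax()
    return cells   # 7x7 label grid; data verification would follow here
```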
532 | Core Structures in Random Graphs and Hypergraphs. Sato, Cristiane Maria (January 2013)
The k-core of a graph is its maximal subgraph with minimum degree at least k. The study of k-cores in random graphs was initiated by Bollobás in 1984 in connection to k-connected subgraphs of random graphs. Subsequently, k-cores and their properties have been extensively investigated in random graphs and hypergraphs, with the determination of the threshold for the emergence of a giant k-core, due to Pittel, Spencer and Wormald, as one of the most prominent results.
In this thesis, we obtain an asymptotic formula for the number of 2-connected graphs, as well as 2-edge-connected graphs, with a given number of vertices and edges in the sparse range, by exploiting properties of random 2-cores. Our results essentially cover the whole range for which asymptotic formulae were not previously known. This is joint work with G. Kemkes and N. Wormald. By defining and analysing a core-type structure for uniform hypergraphs, we obtain an asymptotic formula for the number of connected 3-uniform hypergraphs with a given number of vertices and edges in a sparse range. This is joint work with N. Wormald.
We also examine robustness aspects of k-cores of random graphs. More specifically, we investigate the effect that the deletion of a random edge has on the k-core as follows: we delete a random edge from the k-core, obtain the k-core of the resulting graph, and compare its order with that of the original k-core. For this investigation we obtain results for the giant k-core of Erdős-Rényi random graphs as well as for random graphs with minimum degree at least k and a given number of vertices and edges.
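The k-core itself is easy to compute by iterative peeling; a minimal Python sketch (not from the thesis, with an illustrative toy graph) is:

```python
# k-core by peeling: repeatedly delete vertices of degree < k until none remain.
def k_core(adj, k):
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) < k:
                for u in adj[v]:                       # detach v from its neighbours
                    adj[u].discard(v)
                del adj[v]
                changed = True
    return adj                                         # maximal subgraph with min degree >= k

# Example: a triangle plus a pendant vertex; its 2-core is the triangle.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(sorted(k_core(g, 2)))   # [0, 1, 2]
```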
533 | On the principles and future of COM featuring: the random dot stereoimage technology. Alexandersson, Anders (January 2003)
No description available.
534 | The Multivariate Ahrens Sampling Method. Karawatzki, Roman (January 2006)
The "Ahrens method" is a very simple method for sampling from univariate distributions. It is based on rejection from piecewise constant hat functions. It can be applied analogously to the multivariate case where hat functions are used that are constant on rectangular domains. In this paper we investigate the case of distributions with so called orthounimodal densities. Technical implementation details as well as their practical limitations are discussed. The application to more general distributions is considered. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
535 | Random walks and non-linear paths in macroeconomic time series. Some evidence and implications. Bevilacqua, Franco; van Zon, Adriaan (January 2002)
This paper investigates whether the inherent non-stationarity of macroeconomic time series is due entirely to a random walk or also to non-linear components. Applying the numerical tools of dynamical systems analysis to long time series for the US, we reject the hypothesis that these series are generated solely by a linear stochastic process. Contrary to Real Business Cycle theory, which attributes the irregular behavior of the system to exogenous random factors, we maintain that the fluctuations in the time series we examined cannot be explained only by means of external shocks plugged into linear autoregressive models. A dynamical, non-linear explanation may serve the dual aim of describing and forecasting the evolution of the system more accurately. Linear growth models whose empirical support rests on linear econometric analysis are therefore seriously called into question. Conversely, non-linear dynamical models may yield more complete information about economic phenomena from the same data sets used in the empirical analyses that support Real Business Cycle theory. We conclude that Real Business Cycle theory, and more generally unit-root autoregressive models, are an inadequate device for a satisfactory understanding of economic time series. A theoretical approach grounded in non-linear metric methods may, however, allow us to identify non-linear structures that endogenously generate fluctuations in macroeconomic time series. (authors' abstract) / Series: Working Papers Series "Growth and Employment in Europe: Sustainability and Competitiveness"
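As a small illustration of the unit-root framework the paper argues against, the sketch below (not from the paper) applies the augmented Dickey-Fuller test to a simulated pure random walk and to a bounded deterministic non-linear series; series lengths and parameters are arbitrary illustrative choices.

```python
# ADF unit-root test on a random walk and on a chaotic logistic-map series.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)

random_walk = np.cumsum(rng.normal(size=500))       # unit-root process

x = np.empty(500)                                    # deterministic non-linear dynamics
x[0] = 0.3
for t in range(1, 500):
    x[t] = 4.0 * x[t - 1] * (1.0 - x[t - 1])         # chaotic logistic map

for name, series in [("random walk", random_walk), ("logistic map", x)]:
    stat, pvalue = adfuller(series)[:2]
    print(f"{name}: ADF statistic {stat:.2f}, p-value {pvalue:.3f}")
```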
536 | New generators of normal and Poisson deviates based on the transformed rejection method. Hörmann, Wolfgang (January 1992)
The transformed rejection method uses inversion to sample from the dominating density of a rejection algorithm; in contrast to the usual approach, it is enough to know the inverse distribution function F^(-1)(x) of the dominating density. This idea can be applied to various continuous (e.g. normal, Cauchy and exponential) and discrete (e.g. binomial and Poisson) distributions with high acceptance probabilities. The resulting algorithms are short, simple and fast. Even more important, the quality of the method, when used in combination with a linear congruential uniform generator, is high compared with that of the ratio-of-uniforms method. In addition, transformed rejection can easily be employed for correlation induction. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
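A minimal sketch of the core idea, rejection in which the dominating density is sampled by inversion of its distribution function, is shown below using a Cauchy hat for the standard normal; the hat choice and the constant M are standard textbook values, not the paper's optimised normal or Poisson generators.

```python
# Rejection sampling where the hat (Cauchy) is generated by inverting its CDF.
import math
import random

M = math.sqrt(2.0 * math.pi) * math.exp(-0.5)   # sup of normal/Cauchy density ratio (~1.52)

def normal_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def cauchy_pdf(x):
    return 1.0 / (math.pi * (1.0 + x * x))

def transformed_rejection_normal():
    while True:
        u = random.random()
        x = math.tan(math.pi * (u - 0.5))        # inversion: Cauchy inverse CDF
        if random.random() * M * cauchy_pdf(x) <= normal_pdf(x):
            return x

draws = [transformed_rejection_normal() for _ in range(5)]
```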
537 | Modeling Dynamic Network with Centrality-based Logistic Regression. Kulmatitskiy, Nikolay (09 1900)
Statistical analysis of network data is an active field of study, in which researchers investigate graph-theoretic concepts and various probability models that explain the behaviour of real networks. This thesis attempts to combine two of these concepts: an exponential random graph and a centrality index. Exponential random graphs comprise the most useful class of probability models for network data. These models often require the assumption of a complex dependence structure, which creates certain difficulties in the estimation of unknown model parameters. However, in the context of dynamic networks the exponential random graph model provides the opportunity to incorporate a complex network structure such as centrality without the usual drawbacks associated with parameter estimation. The thesis employs this idea by proposing probability models that are equivalent to logistic regression models and that can be used to explain the behaviour of both static and dynamic networks.
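A hypothetical sketch of the kind of model described, a dyad-level logistic regression whose covariate is a centrality score computed from the previous network snapshot, follows; the data are simulated, degree centrality stands in for the thesis's centrality index, and the exact covariates and estimation details may differ.

```python
# Dyad-level logistic regression with a lagged centrality covariate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 40

A_prev = (rng.random((n, n)) < 0.1).astype(int)            # network at time t
np.fill_diagonal(A_prev, 0)
A_prev = np.maximum(A_prev, A_prev.T)                       # symmetrise

degree = A_prev.sum(axis=1) / (n - 1)                       # degree centrality at time t

# Simulate the next snapshot so that ties form more readily between central nodes.
logits = -3.0 + 4.0 * (degree[:, None] + degree[None, :])
A_next = (rng.random((n, n)) < 1.0 / (1.0 + np.exp(-logits))).astype(int)
np.fill_diagonal(A_next, 0)

# One row per ordered dyad (i, j), i != j: covariate and tie indicator at t+1.
rows = [(degree[i] + degree[j], A_next[i, j])
        for i in range(n) for j in range(n) if i != j]
X = np.array([[r[0]] for r in rows])
y = np.array([r[1] for r in rows])

model = LogisticRegression().fit(X, y)
print("centrality coefficient:", model.coef_[0][0])
```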
538 | Frozen-State Hierarchical Annealing. Campaigne, Wesley (January 2012)
There is significant interest in the synthesis of discrete-state random fields, particularly those possessing structure over a wide range of scales. However, given a model on some finest, pixellated scale, it is computationally very difficult to synthesize both large and small-scale structures, motivating research into hierarchical methods.
This thesis proposes a frozen-state approach to hierarchical modelling, in which simulated annealing is performed at each scale, constrained by the state estimates at the parent scale. The approach leads to significant advantages in both modelling flexibility and computational complexity. In particular, a complex structure can be realized with very simple, local, scale-dependent models, and by constraining the domain to be annealed at finer scales to only the uncertain portions of coarser scales, the approach yields huge improvements in computational complexity. Results are shown for synthesis problems in porous media.
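A toy sketch of one level of such a scheme, Metropolis-style annealing of a binary field in which sites decided at the parent scale stay frozen and only the uncertain sites are visited, is given below; the Ising-style smoothness energy, schedule and uncertain window are illustrative assumptions, not the thesis's model.

```python
# One level of frozen-state annealing: anneal only the non-frozen sites.
import numpy as np

rng = np.random.default_rng(2)

def anneal_level(field, frozen, sweeps=200, t_start=2.0, t_end=0.05):
    h, w = field.shape
    uncertain = np.argwhere(~frozen)
    for temp in np.geomspace(t_start, t_end, sweeps):
        for i, j in uncertain[rng.permutation(len(uncertain))]:
            # Energy change of flipping site (i, j) under a smoothness prior:
            # flipping turns agreeing neighbours into disagreeing ones.
            nbrs = [field[(i - 1) % h, j], field[(i + 1) % h, j],
                    field[i, (j - 1) % w], field[i, (j + 1) % w]]
            agree = sum(1 if v == field[i, j] else -1 for v in nbrs)
            delta = 2.0 * agree
            if delta <= 0 or rng.random() < np.exp(-delta / temp):
                field[i, j] = 1 - field[i, j]
    return field

# Parent scale fully decided except for a central window left uncertain.
coarse = rng.integers(0, 2, size=(16, 16))
frozen = np.ones((16, 16), dtype=bool)
frozen[4:12, 4:12] = False
result = anneal_level(coarse.copy(), frozen)
```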
539 | On the Asymptotic Number of Active Links in a Random Network. Zoghalchi, Farshid (January 2012)
A network of n transmitters and n receivers is considered. We assume that transmitter i aims to send data to its designated destination, receiver i. Communications occur in a single-hop fashion, and destination nodes are simple linear receivers without multi-user detection. Therefore, in each time slot every source node can only talk to one destination node, so there is a total of n communication links. An important question now arises: how many links can be active in such a network so that each of them supports a minimum rate Rmin? This dissertation is devoted to this problem and tries to solve it in two different settings, dense and extended networks. In both settings our approach is asymptotic, meaning we only examine the behaviour of the network when the number of nodes tends to infinity. We are also interested in events that occur asymptotically almost surely (a.a.s.), i.e., events whose probabilities approach one as the size of the network gets large. In the first part of the thesis, we consider a dense network where fading is the dominant factor affecting the quality of transmissions. Rayleigh channels are used to model the impact of fading. It is shown that a.a.s. log(n)^2 links can simultaneously maintain Rmin and thus be active. In the second part, an extended network is considered where nodes are distant from each other and thus a more complete model must take inter-node distances into account. We will show that in this case almost all of the links can be active while maintaining the minimum rate.
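A small Monte Carlo sketch of the dense-network setting, counting how many of n Rayleigh-faded links would meet a minimum rate if every transmitter were active at once, is shown below; powers, noise level and Rmin are arbitrary illustrative values, and no link selection or scheduling is performed, so it does not reproduce the thesis's log(n)^2 result.

```python
# Count links whose rate log2(1 + SINR) exceeds Rmin under Rayleigh fading.
import numpy as np

rng = np.random.default_rng(3)
n, rmin, noise = 200, 0.1, 1.0

# G[i, j]: power gain from transmitter j to receiver i (Rayleigh fading =>
# exponentially distributed power gains with mean 1).
G = rng.exponential(scale=1.0, size=(n, n))

signal = np.diag(G)
interference = G.sum(axis=1) - signal
sinr = signal / (noise + interference)
rates = np.log2(1.0 + sinr)

print("links with rate >= Rmin:", int(np.sum(rates >= rmin)), "out of", n)
```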
540 | Comparison of Imputation Methods on Estimating Regression Equation in MNAR Mechanism. Pan, Wensi (January 2012)
In this article, we give an overview of the missing data problem, introduce three missing data mechanisms, and study general solutions to them when estimating a linear regression equation. When data are partly missing, there are two common ways to deal with the problem: one is to ignore the records with missing values, the other is to impute the missing observations. Imputation methods are preferred since they provide full datasets. We observe that there is no general imputation solution under the missing not at random (MNAR) mechanism. In order to check the performance of existing imputation methods in a regression model, a simulation study is set up. Listwise deletion, simple imputation and multiple imputation are compared, with the focus on their effect on parameter estimates and standard errors. The simulation results illustrate that listwise deletion provides reliable parameter estimates. Simple imputation performs better than multiple imputation in a model with a high coefficient of determination. Multiple imputation, which offers a suitable solution under missing at random (MAR), is not valid for MNAR.
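A small simulation in the spirit of this comparison follows: a covariate is made missing not at random, and the regression slope is re-estimated after listwise deletion and after simple mean imputation; the sample size, coefficients and missingness rule are illustrative, and multiple imputation is omitted.

```python
# MNAR in the covariate: compare slope estimates after listwise deletion
# and after simple mean imputation.
import numpy as np

rng = np.random.default_rng(4)
n, b0, b1 = 2000, 1.0, 2.0

x = rng.normal(size=n)
y = b0 + b1 * x + rng.normal(scale=1.0, size=n)

# MNAR: the probability that x is missing depends on x itself.
missing = rng.random(n) < 1.0 / (1.0 + np.exp(-2.0 * x))
x_obs = np.where(missing, np.nan, x)

def slope(xv, yv):
    X = np.column_stack([np.ones_like(xv), xv])
    return np.linalg.lstsq(X, yv, rcond=None)[0][1]

keep = ~np.isnan(x_obs)                                        # listwise deletion
x_mean = np.where(np.isnan(x_obs), np.nanmean(x_obs), x_obs)   # simple (mean) imputation

print("true slope:       ", b1)
print("listwise deletion:", round(slope(x_obs[keep], y[keep]), 3))
print("mean imputation:  ", round(slope(x_mean, y), 3))
```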