421
Numerical solutions of continuous wave beam in nonlinear media. Huang, Jeffrey (01 January 1987)
Deformation of a Gaussian beam is observed when it propagates through a plasma. Self-focusing of the beam may occur when the laser intensity increases the index of refraction of the plasma gas.
Because of the difficulty of solving the nonlinear partial differential equation arising from Maxwell's wave equation, a numerical technique was developed in place of the traditional analytical method. The numerical results are consistent with the analytical solution, which supports the validity of the numerical technique employed.
A three-dimensional graphics package was used to depict the numerical data obtained from the calculation. Plots of the data further show the deformation of the Gaussian beam as it propagates through the plasma gas.
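The abstract does not give the numerical scheme itself; a common way to illustrate this kind of calculation is a split-step propagation of the paraxial wave equation with an intensity-dependent refractive index. The Python sketch below is only that kind of illustration: the grid, wavelength, beam waist, and nonlinear coefficient are placeholder values, not the thesis's parameters.

```python
# Minimal split-step Fourier sketch for a Gaussian beam in a medium whose
# refractive index grows with intensity (a Kerr-like model). Illustration only:
# all values below (grid, wavelength, waist, n2) are made up, not from the thesis.
import numpy as np

N, L = 256, 4e-3                       # grid points and transverse window (m)
x = np.linspace(-L / 2, L / 2, N)
X, Y = np.meshgrid(x, x)
lam, w0, n2 = 1.0e-6, 0.5e-3, 1e-7     # wavelength, beam waist, nonlinear index (arbitrary)
k = 2 * np.pi / lam

E = np.exp(-(X**2 + Y**2) / w0**2)     # initial Gaussian beam profile

fx = np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
dz, steps = 1e-4, 200
diffraction = np.exp(-1j * (KX**2 + KY**2) * dz / (2 * k))  # paraxial propagator

for _ in range(steps):
    E = np.fft.ifft2(np.fft.fft2(E) * diffraction)   # linear half-step: diffraction
    E *= np.exp(1j * k * n2 * np.abs(E)**2 * dz)     # nonlinear half-step: intensity-dependent phase

intensity = np.abs(E)**2               # deformed beam profile after propagation
print("peak intensity after propagation:", intensity.max())
```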
422
A Radial Basis Neural Network for the Analysis of Transportation Data. Aguilar, David P (04 November 2004)
This thesis describes the implementation of a Radial Basis Function (RBF) network to be used in predicting the effectiveness of various strategies for reducing the Vehicle Trip Rate (VTR) of a worksite. Three methods of learning were used to train the Gaussian hidden units of the network: a) output weight adjustment using the Delta rule, b) adjustable reference vectors in conjunction with weight adjustment, and c) a combination of adjustable centers and adjustable sigma values for each RBF neuron together with the Delta rule. The justification for each of the more advanced levels of training is provided through a series of tests and performance comparisons.
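As a rough illustration of learning method (a), the sketch below trains only the output weights of a Gaussian RBF layer with the Delta rule; the centers are fixed reference vectors and a single sigma value is shared. The data, network size, and learning rate are invented for the example and are not the worksite VTR data used in the thesis.

```python
# Minimal sketch of an RBF network with Gaussian hidden units: fixed centers,
# output weights trained by the Delta rule on synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))            # stand-in worksite attributes
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2         # stand-in "trip rate" target

n_hidden, sigma, lr, epochs = 10, 0.5, 0.05, 200
centers = X[rng.choice(len(X), n_hidden, replace=False)]   # fixed reference vectors
w = np.zeros(n_hidden)

def hidden(X):
    # Gaussian activations of the RBF units
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

for _ in range(epochs):
    H = hidden(X)
    err = y - H @ w
    w += lr * H.T @ err / len(X)                 # Delta rule on the output weights
    sse = (err ** 2).sum()

print("final SSE:", sse)
```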
The network architecture is then selected based on a series of initial trials to find an optimum number of hidden Radial Basis neurons. Similarly, the training time is determined by finding the maximum number of epochs over which there is still a significant change in the sum of squared errors (SSE).
The network was compared for effectiveness against each of the following methods of data analysis: force-entered regression, backward regression, forward regression, stepwise regression, and two types of back-propagation networks based upon the attributes discovered to be most predictive by these regression techniques.
A comparison of the learning methods used on the Radial Basis network shows the third learning strategy to be the most efficient for training, yielding the lowest SSE in the fewest training epochs. Comparing the RBF implementation against the other methods mentioned shows the superiority of the Radial Basis method in predictive ability.
423
Certain Diagonal Equations over Finite Fields. Sze, Christopher (29 May 2009)
Let $\mathbb{F}_{q^t}$ be the finite field with $q^t$ elements and let $\mathbb{F}_{q^t}^*$ be its multiplicative group. We study the diagonal equation $ax^{q-1} + by^{q-1} = c$, where $a, b, c \in \mathbb{F}_{q^t}^*$. This equation can be written as $x^{q-1} + \alpha y^{q-1} = \beta$, where $\alpha, \beta \in \mathbb{F}_{q^t}^*$. Let $N_t(\alpha, \beta)$ denote the number of solutions $(x, y) \in \mathbb{F}_{q^t}^* \times \mathbb{F}_{q^t}^*$ of $x^{q-1} + \alpha y^{q-1} = \beta$, and let $I(r; a, b)$ be the number of monic irreducible polynomials $f \in \mathbb{F}_q[x]$ of degree $r$ with $f(0) = a$ and $f(1) = b$. We show that $N_t(\alpha, \beta)$ can be expressed in terms of $I(r; a, b)$, where $r \mid t$ and $a, b \in \mathbb{F}_q^*$ are related to $\alpha$ and $\beta$. A recursive formula for $I(r; a, b)$ is given, and we illustrate it by computing $I(r; a, b)$ for $2 \le r \le 4$. We also show that $N_3(\alpha, \beta)$ can be expressed in terms of the number of monic irreducible cubic polynomials over $\mathbb{F}_q$ with prescribed trace and norm. Consequently, $N_3(\alpha, \beta)$ can be expressed in terms of the number of rational points on a certain elliptic curve. We give a proof that, for any $a, b \in \mathbb{F}_q^*$ and integer $r \ge 3$, there always exists a monic irreducible polynomial $f \in \mathbb{F}_q[x]$ of degree $r$ such that $f(0) = a$ and $f(1) = b$. We also use the result on $N_2(\alpha, \beta)$ to construct a new family of planar functions.
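The counting problem can be checked by brute force on a tiny example. The sketch below assumes the smallest non-trivial case $q = 3$, $t = 2$ (so the field is $\mathbb{F}_9$, built as $\mathbb{F}_3$ with an adjoined square root of $-1$) and simply enumerates solutions of $x^{q-1} + \alpha y^{q-1} = \beta$; it does not implement the recursive formula for $I(r; a, b)$.

```python
# Brute-force check of N_t(alpha, beta) for q = 3, t = 2. F_9 is realized as
# F_3[i] with i^2 = -1; elements are pairs (a, b) meaning a + b*i. Toy
# verification of the counting problem only, not the paper's formulas.
P = 3  # base field F_3

def mul(u, v):
    (a, b), (c, d) = u, v
    return ((a * c - b * d) % P, (a * d + b * c) % P)

def add(u, v):
    return ((u[0] + v[0]) % P, (u[1] + v[1]) % P)

def power(u, n):
    out = (1, 0)
    for _ in range(n):
        out = mul(out, u)
    return out

units = [(a, b) for a in range(P) for b in range(P) if (a, b) != (0, 0)]

def count_solutions(alpha, beta, q=3):
    # number of (x, y) in F_9* x F_9* with x^(q-1) + alpha*y^(q-1) = beta
    return sum(
        add(power(x, q - 1), mul(alpha, power(y, q - 1))) == beta
        for x in units
        for y in units
    )

print(count_solutions(alpha=(1, 0), beta=(2, 0)))   # alpha = 1, beta = 2
print(count_solutions(alpha=(0, 1), beta=(1, 1)))   # alpha = i, beta = 1 + i
```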
424
An investigation of long-term dependence in time-series data. Ellis, Craig, University of Western Sydney, Macarthur, Faculty of Business and Technology (January 1998)
Traditional models of financial asset yields are based on a number of simplifying assumptions. Among these are the primary assumptions that changes in asset yields are independent, and that the distribution of these yields is approximately normal. The development of financial asset pricing models has also incorporated these assumptions. A general feature of the pricing models is that the relationship between the model variables is fundamentally linear. Recent empirical research has, however, identified the possibility for these relations to be non-linear. The empirical research in this thesis focused primarily on methodological issues relating to the application of the classical rescaled adjusted range. Some of the major issues investigated were: the use of overlapping versus contiguous subseries lengths in the calculation of the statistic's Hurst exponent; the asymptotic distribution of the Hurst exponent for Gaussian time-series and long-term dependent fractional Brownian motions (fBm); and matters pertaining to the estimation of the expected rescaled adjusted range. Empirical research in this thesis also considered applications of rescaled range analysis other than modelling non-linear long-term dependence. Issues relating to the use of the technique for estimating long-term dependent ARFIMA processes, and some implications of long-term dependence for financial time-series, have both been investigated. Overall, the general shape of the asymptotic distribution of the Hurst exponent has been shown to be invariant to the level of dependence in the underlying series. While the rescaled adjusted range is a biased indicator of the level of long-term dependence in simulated time-series, it was found that the bias could be efficiently modelled. For real time-series containing structured short-term dependence, the bias was shown to be inconsistent with the simulated results. / Doctor of Philosophy (PhD)
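For readers unfamiliar with the statistic, the sketch below computes the classical rescaled adjusted range on contiguous subseries and estimates a Hurst exponent from the slope of log(R/S) against log(n). It is a textbook-style illustration on simulated independent Gaussian returns and omits the bias corrections investigated in the thesis.

```python
# Classical rescaled adjusted range (R/S) and a crude Hurst-exponent estimate
# from contiguous subseries. Illustrative only.
import numpy as np

def rescaled_range(x):
    x = np.asarray(x, dtype=float)
    dev = x - x.mean()
    z = np.cumsum(dev)                        # cumulative deviations from the mean
    r = z.max() - z.min()                     # adjusted range
    s = x.std(ddof=0)                         # standard deviation
    return r / s

def hurst_estimate(series, sizes=(16, 32, 64, 128, 256)):
    logs_n, logs_rs = [], []
    for n in sizes:
        chunks = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
        rs = np.mean([rescaled_range(c) for c in chunks])
        logs_n.append(np.log(n))
        logs_rs.append(np.log(rs))
    slope, _ = np.polyfit(logs_n, logs_rs, 1)  # slope ~ Hurst exponent
    return slope

rng = np.random.default_rng(1)
returns = rng.normal(size=4096)                # independent Gaussian "yields"
print("H estimate (should be near 0.5):", hurst_estimate(returns))
```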
425
Asymptotic methods for tests of homogeneity for finite mixture models. Stewart, Michael Ian (January 2002)
We present limit theory for tests of homogeneity for finite mixture models. More specifically, we derive the asymptotic distribution of certain random quantities used for testing that a mixture of two distributions is in fact just a single distribution. Our methods apply to cases where the mixture component distributions come from one of a wide class of one-parameter exponential families, both continuous and discrete. We consider two random quantities, one related to testing simple hypotheses, the other composite hypotheses. For simple hypotheses we consider the maximum of the standardised score process, which is itself a test statistic. For composite hypotheses we consider the maximum of the efficient score process, which is not itself a statistic (it depends on the unknown true distribution) but is asymptotically equivalent, in a certain sense, to certain common test statistics. We show that both quantities can be approximated by the maximum of a certain Gaussian process depending on the sample size and the true distribution of the observations, which when suitably normalised has a limiting distribution of the Gumbel extreme-value type. Although the limit theory is not practically useful for computing approximate p-values, Monte Carlo simulations show that another method suggested by the theory, using a Studentised version of the maximum-score statistic and simulating a Gaussian process to compute approximate p-values, is remarkably accurate and uses a fraction of the computing resources of a direct Monte Carlo approximation.
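A minimal sketch of the simulation-based p-value idea is given below: the null distribution of the maximum of a discretised Gaussian process is approximated by Monte Carlo draws and a tail probability is read off. The covariance kernel and the observed statistic in the sketch are stand-ins, not the process derived in the thesis for a particular exponential family.

```python
# Approximate the null distribution of the maximum of a discretised Gaussian
# process by simulation, then compute a tail probability for an observed
# maximum-score-type statistic. Kernel and statistic are placeholders.
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.1, 3.0, 60)                         # discretised parameter range

# Stand-in covariance: a smooth stationary kernel on the grid, plus jitter
cov = np.exp(-0.5 * (grid[:, None] - grid[None, :]) ** 2)
cov += 1e-9 * np.eye(len(grid))

def max_of_gp(n_sims=20000):
    draws = rng.multivariate_normal(np.zeros(len(grid)), cov, size=n_sims)
    return draws.max(axis=1)

null_maxima = max_of_gp()
observed_max = 2.7                                       # placeholder observed statistic
p_value = np.mean(null_maxima >= observed_max)
print("approximate p-value:", p_value)
```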
426
Fundamental Estimation and Detection Limits in Linear Non-Gaussian Systems. Hendeby, Gustaf (January 2005)
Many methods used for estimation and detection consider only the mean and variance of the involved noise instead of the full noise descriptions. One reason for this is that the mathematics is often considerably simplified this way. However, the implications of the simplifications are seldom studied, and this thesis shows that if no approximations are made, performance is gained. Furthermore, the gain is quantified in terms of the useful information in the noise distributions involved. The useful information is given by the intrinsic accuracy, and a method to compute the intrinsic accuracy for a given distribution, using Monte Carlo methods, is outlined.

A lower bound for the covariance of the estimation error for any unbiased estimator is given by the Cramér-Rao lower bound (CRLB). At the same time, the Kalman filter is the best linear unbiased estimator (BLUE) for linear systems. It is shown in this thesis that the CRLB and the BLUE performance are given by the same expression, which is parameterized in the intrinsic accuracy of the noise. How the performance depends on the noise is then used to indicate when nonlinear filters, e.g., a particle filter, should be used instead of a Kalman filter. The CRLB results are shown, in simulations, to be a useful indication of when to use more powerful estimation methods. The simulations also show that other techniques should be used as a complement to the CRLB analysis to get conclusive performance results.

For fault detection, the statistics of the asymptotic generalized likelihood ratio (GLR) test provide an upper bound on the obtainable detection performance. The performance is shown in this thesis to depend on the intrinsic accuracy of the involved noise. The asymptotic GLR performance can then be calculated for a test using the actual noise and for a test using the approximative Gaussian noise. Based on the difference in performance, it is possible to draw conclusions about the quality of the Gaussian approximation. Simulations show that when the difference in performance is large, an exact noise representation improves the detection. Simulations also show that it is difficult to predict the exact influence on the detection performance caused by substituting the system noise with Gaussian noise approximations.

Report code: LiU-Tek-Lic-2005:54
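A small sketch of the intrinsic-accuracy computation mentioned above: the Fisher information of a distribution with respect to a location parameter, E[(d/dx log p(x))^2], is estimated by Monte Carlo and compared with 1/variance, the value attained by Gaussian noise. The bimodal Gaussian mixture below is an invented example, not a noise model from the thesis.

```python
# Monte Carlo estimate of the intrinsic accuracy (location Fisher information)
# of a Gaussian-mixture noise distribution, compared with 1/variance.
import numpy as np

rng = np.random.default_rng(0)
w, mu, sig = np.array([0.5, 0.5]), np.array([-1.5, 1.5]), np.array([0.5, 0.5])

def pdf(x):
    comps = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    return comps.sum(axis=1)

def dpdf(x):
    comps = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    return (comps * (-(x[:, None] - mu) / sig ** 2)).sum(axis=1)

# Sample from the mixture
n = 200000
comp = rng.choice(2, size=n, p=w)
x = rng.normal(mu[comp], sig[comp])

score = dpdf(x) / pdf(x)                   # d/dx log p(x)
intrinsic_accuracy = np.mean(score ** 2)   # Monte Carlo Fisher information estimate
variance = x.var()

print("intrinsic accuracy:", intrinsic_accuracy)
print("1 / variance      :", 1 / variance)                    # equal only for Gaussian noise
print("relative accuracy :", variance * intrinsic_accuracy)   # >= 1 in general
```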
427
Foreground Segmentation of Moving Objects. Molin, Joel (January 2010)
Foreground segmentation is a common first step in tracking and surveillance applications. The purpose of foreground segmentation is to provide later stages of image processing with an indication of where interesting data can be found. This thesis is an investigation of how foreground segmentation can be performed in two contexts: as a pre-step to trajectory tracking and as a pre-step in indoor surveillance applications.

Three methods are selected and detailed: a single Gaussian method, a Gaussian mixture model method, and a codebook method. Experiments are then performed on typical input video using the methods. It is concluded that the Gaussian mixture model produces the output which yields the best trajectories when used as input to the trajectory tracker. An extension is proposed to the Gaussian mixture model which reduces shadow, improving the performance of foreground segmentation in the surveillance context.
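As an illustration of the simplest of the three methods, the sketch below maintains a per-pixel single-Gaussian background model with a running mean and variance, and flags pixels that deviate by more than a few standard deviations as foreground. The frame source, learning rate, and threshold are placeholders rather than the settings used in the thesis.

```python
# Per-pixel single-Gaussian background model: running mean/variance update and
# a Mahalanobis-style foreground test. Toy frames stand in for real video.
import numpy as np

class SingleGaussianBackground:
    def __init__(self, first_frame, alpha=0.01, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full_like(self.mean, 15.0 ** 2)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        foreground = d2 > (self.k ** 2) * self.var      # deviation test per pixel
        upd = ~foreground                               # update only background-like pixels
        self.mean[upd] += self.alpha * (frame[upd] - self.mean[upd])
        self.var[upd] += self.alpha * (d2[upd] - self.var[upd])
        return foreground

rng = np.random.default_rng(0)
frames = rng.normal(120, 5, size=(50, 120, 160))        # random grayscale stand-in frames
bg = SingleGaussianBackground(frames[0])
for f in frames[1:]:
    mask = bg.apply(f)
print("foreground pixels in last frame:", int(mask.sum()))
```

For the Gaussian-mixture variant, a ready-made implementation such as OpenCV's cv2.createBackgroundSubtractorMOG2 (which also offers shadow suppression) can be used instead of a hand-rolled model.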
428
Statistical Background Models with Shadow Detection for Video Based Tracking. Wood, John (January 2007)
A common problem when using background models to segment moving objects from video sequences is that the shadows cast by objects usually differ significantly from the background and therefore get detected as foreground. This causes several problems when extracting and labeling objects, such as object shape distortion and several objects merging together. The purpose of this thesis is to explore various possibilities to handle this problem.

Three methods for statistical background modeling are reviewed. All methods work on a per-pixel basis: the first is based on approximating the median, the next on Gaussian mixture models, and the last on channel representation. It is concluded that all methods detect cast shadows as foreground.

A study of existing methods to handle cast shadows has been carried out in order to gain knowledge of the subject and get ideas. A common approach is to transform the RGB color representation into a representation that separates color into intensity and chromatic components in order to determine whether or not newly sampled pixel values are related to the background. The color spaces HSV, IHSL, CIELAB, YCbCr, and a color model proposed in the literature (Horprasert et al.) are discussed and compared for the purpose of shadow detection. It is concluded that Horprasert's color model is the most suitable for this purpose.

The thesis ends with a proposal of a method that combines background modeling using Gaussian mixture models with shadow detection using Horprasert's color model. It is concluded that, while not perfect, such a combination can be very helpful in segmenting objects and detecting their cast shadows.
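The sketch below illustrates the general idea behind Horprasert-style shadow detection in a much simplified form: each pixel's deviation from the background color is split into a brightness distortion along the background color vector and a chromaticity distortion away from it, and darker pixels with small chromaticity distortion are labelled as shadow. The thresholds and normalisation are placeholders, not the statistically normalised distortions of the original model.

```python
# Simplified brightness/chromaticity-distortion shadow test, inspired by the
# idea behind Horprasert et al.'s color model. Thresholds are arbitrary.
import numpy as np

def classify(frame, background, cd_thresh=10.0, alpha_low=0.5, alpha_high=0.95):
    I = frame.astype(float)                  # current RGB frame,    shape (H, W, 3)
    E = background.astype(float)             # background RGB model, shape (H, W, 3)
    denom = (E * E).sum(axis=2) + 1e-6
    alpha = (I * E).sum(axis=2) / denom      # brightness distortion (scaling along E)
    cd = np.linalg.norm(I - alpha[..., None] * E, axis=2)   # chromaticity distortion

    shadow = (cd < cd_thresh) & (alpha >= alpha_low) & (alpha < alpha_high)
    foreground = (cd >= cd_thresh) | (alpha >= 1.2) | (alpha < alpha_low)
    foreground &= ~shadow
    return foreground, shadow

bg = np.full((100, 100, 3), 150.0)
frame = bg.copy()
frame[40:60, 40:60] *= 0.7                   # darker patch with unchanged chromaticity
fg, sh = classify(frame, bg)
print("shadow pixels:", int(sh.sum()), "foreground pixels:", int(fg.sum()))
```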
429
Variable selection and neural networks for high-dimensional data analysis: application in infrared spectroscopy and chemometrics. Benoudjit, Nabil (24 November 2003)
This thesis focuses particularly on the application of chemometrics in the field of analytical chemistry. Chemometrics (or multivariate analysis) consists in finding a relationship between two groups of variables, often called dependent and independent variables. In infrared spectroscopy, for instance, chemometrics consists in predicting a quantitative variable that is delicate to obtain (requiring a chemical analysis and a qualified operator), such as the concentration of a component present in the studied product, from spectral data measured at several hundred or even several thousand wavelengths or wavenumbers. In this research we propose a methodology in the field of chemometrics to handle such chemical (spectrophotometric) data, which are often high-dimensional.

To handle these data, we first propose a new incremental (step-by-step) method for the selection of spectral variables using linear and non-linear regression, based on the combination of three principles: linear or non-linear regression, an incremental procedure for variable selection, and the use of a validation set. This procedure allows us, on the one hand, to benefit from the advantages of non-linear methods for predicting chemical data (there is often a non-linear relationship between dependent and independent variables), and on the other hand to avoid overfitting, one of the most crucial problems encountered with non-linear models. Secondly, we propose to improve the previous method by a judicious choice of the first selected variable, which has a very important influence on the final prediction performance. The idea is to use a measure of the mutual information between the independent and dependent variables to select the first variable; the incremental (step-by-step) method is then used to select the following ones. The variable selected by mutual information can have a good interpretation from the spectrochemical point of view, and does not depend on how the data are distributed between the training and validation sets. By contrast, traditional linear chemometric methods such as PCR or PLSR produce new variables which have no interpretation from the spectrochemical point of view.

Four real-life datasets (wine, orange juice, milk powder and apples) are presented in order to show the efficiency and advantages of both proposed procedures compared to the traditional linear chemometric methods often used, such as MLR, PCR and PLSR.
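A minimal sketch of the second proposed procedure, on invented data: the first variable is chosen by its mutual information with the dependent variable, and further variables are added incrementally as long as the error on a separate validation set decreases. Plain linear regression stands in for the linear or non-linear model, and scikit-learn's mutual_info_regression serves as the mutual information estimate; the spectra are synthetic, not the wine, juice, milk powder, or apple datasets.

```python
# Incremental (step-by-step) variable selection with the first variable chosen
# by mutual information and subsequent variables accepted only while the
# validation error keeps decreasing. Synthetic stand-in spectra.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 200))                  # 200 synthetic "wavelengths"
y = 2 * X[:, 10] + X[:, 50] ** 2 + 0.1 * rng.normal(size=150)
X_tr, X_val, y_tr, y_val = X[:100], X[100:], y[:100], y[100:]

mi = mutual_info_regression(X_tr, y_tr, random_state=0)
selected = [int(np.argmax(mi))]                  # first variable: highest mutual information

def val_error(cols):
    model = LinearRegression().fit(X_tr[:, cols], y_tr)
    return np.mean((model.predict(X_val[:, cols]) - y_val) ** 2)

best = val_error(selected)
improved = True
while improved:
    improved = False
    candidates = [j for j in range(X.shape[1]) if j not in selected]
    errors = [val_error(selected + [j]) for j in candidates]
    j_best = candidates[int(np.argmin(errors))]
    if min(errors) < best:                       # stop when validation error stops improving
        best, selected = min(errors), selected + [j_best]
        improved = True

print("selected variables:", selected, "validation MSE:", round(best, 4))
```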
430
Picking Parts out of a Bin. Horn, Berthold K.P., Ikeuchi, Katsushi (01 October 1983)
One of the remaining obstacles to the widespread application of industrial robots is their inability to deal with parts that are not precisely positioned. In the case of manual assembly, components are often presented in bins. Current automated systems, on the other hand, require separate feeders which present the parts with carefully controlled position and attitude. Here we show how results in machine vision provide techniques for automatically directing a mechanical manipulator to pick one object at a time out of a pile. The attitude of the object to be picked up is determined using a histogram of the orientations of visible surface patches. Surface orientation, in turn, is determined using photometric stereo applied to multiple images. These images are taken with the same camera but under differing lighting. The resulting needle map, giving the orientations of surface patches, is used to create an orientation histogram, which is a discrete approximation to the extended Gaussian image. This can be matched against a synthetic orientation histogram obtained from prototypical models of the objects to be manipulated. Such models may be obtained from computer-aided design (CAD) databases. The method thus requires that the shape of the objects be described, but it is not restricted to particular types of objects.
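A compact sketch of the two steps, under a Lambertian reflectance assumption with three known light directions: surface normals are recovered per pixel by inverting the lighting matrix, and the normals are then binned into an orientation histogram, a discrete approximation of the extended Gaussian image. The synthetic sphere stands in for real images of a part, and no matching against prototype histograms is shown.

```python
# Photometric stereo (Lambertian, three known lights) followed by an
# orientation histogram of the recovered needle map. Synthetic sphere only.
import numpy as np

L = np.array([[0.0, 0.0, 1.0],            # light directions, one per image (rows)
              [0.7, 0.0, 0.7],
              [0.0, 0.7, 0.7]])
n = 64
u = np.linspace(-1, 1, n)
X, Y = np.meshgrid(u, u)
mask = X**2 + Y**2 < 0.95
Z = np.sqrt(np.clip(1 - X**2 - Y**2, 0, None))
normals_true = np.dstack([X, Y, Z])       # unit normals of a synthetic sphere

# Lambertian image formation: intensity = normal . light (clipped at zero)
images = np.clip(normals_true @ L.T, 0, None)

# Photometric stereo: solve L n = I per pixel, then normalise
g = images @ np.linalg.inv(L).T
normals = g / (np.linalg.norm(g, axis=2, keepdims=True) + 1e-9)

# Orientation histogram over (azimuth, elevation) bins: the needle-map summary
az = np.arctan2(normals[..., 1], normals[..., 0])[mask]
el = np.arcsin(np.clip(normals[..., 2], -1, 1))[mask]
hist, _, _ = np.histogram2d(az, el, bins=(18, 9))
print("most populated orientation bin:", hist.max())
```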