211

Multi-camera uncalibrated visual servoing

Marshall, Matthew Q. 20 September 2013 (has links)
Uncalibrated visual servoing (VS) can improve robot performance without requiring camera and robot parameters. Multiple cameras improve uncalibrated VS precision, but no prior work uses more than two cameras simultaneously. The first data for uncalibrated VS simultaneously using more than two cameras are presented. VS performance is also compared for two different camera models, a high-cost camera and a low-cost camera, which differ in image noise magnitude and focal length. A Kalman filter based control law for uncalibrated VS is introduced and shown to be stable under the assumptions that robot joint-level servo control can reach commanded joint offsets and that the servoing path passes through at least one full-column-rank robot configuration. Adaptive filtering by a covariance-matching technique is applied to achieve automatic camera weighting, prioritizing the best available data. A decentralized sensor fusion architecture is used to assure continuous servoing under camera occlusion. The decentralized adaptive Kalman filter (DAKF) control law is compared to a classical method, Gauss-Newton, via simulation and experimentation. Numerical results show that DAKF can improve average tracking error for moving targets and convergence time to static targets. DAKF reduces system sensitivity to noise and poor camera placement, yielding smaller outliers than Gauss-Newton. The DAKF system improves visual servoing performance, simplicity, and reliability.
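The covariance-matching adaptation at the heart of such a filter can be sketched compactly: compare the sample covariance of recent innovations with its predicted value and re-derive the measurement-noise covariance from the mismatch, so noisier cameras are automatically down-weighted. The Python sketch below shows this adaptation for a single sensor under assumed linear models; the state model, window length, and all names are illustrative, not the thesis's decentralized multi-camera implementation.

    import numpy as np

    def adaptive_kalman_step(x, P, z, F, H, Q, R, window, N=20):
        # One Kalman step with covariance-matching adaptation of R.
        # Hypothetical single-sensor sketch; the thesis's DAKF fuses
        # several cameras in a decentralized architecture.
        x_pred = F @ x                      # predict state
        P_pred = F @ P @ F.T + Q            # predict covariance

        v = z - H @ x_pred                  # innovation
        window.append(np.outer(v, v))       # sliding window of v v^T
        if len(window) > N:
            window.pop(0)

        # Covariance matching: the sample innovation covariance should
        # equal H P_pred H^T + R, so solve for an adapted R.
        C_v = np.mean(window, axis=0)
        R_adapt = C_v - H @ P_pred @ H.T
        if np.all(np.linalg.eigvalsh(R_adapt) > 0):
            R = R_adapt                     # accept only if positive definite

        S = H @ P_pred @ H.T + R            # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S) # Kalman gain
        x_new = x_pred + K @ v
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new, R

Running one such filter per camera and weighting the fused estimate by the inverse of each adapted covariance yields the automatic camera prioritization described above.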
212

Single Killing Vector Gauss-Bonnet Boson Stars and Single Killing Vector Hairy Black Holes in D>5 Odd Dimensions

Henderson, Laura January 2014 (has links)
I construct anti-de Sitter boson stars in Einstein-Gauss-Bonnet gravity coupled to a (D-1)/2-tuplet of complex massless scalar fields, both perturbatively and numerically, in D=5,7,9,11 dimensions. Due to the choice of scalar fields, these solutions possess just a single helical Killing symmetry. For each choice of the Gauss-Bonnet parameter α≠α_cr, the energy density at the center of the boson star, q_0, completely characterizes the one-parameter family of solutions. These solutions obey the first law of thermodynamics, in the numerical case to within 1 part in 10^6. I describe the dependence of the boson star mass, angular momentum and angular velocity on α and on the dimensionality. For α<α_cr and D>5, these quantities exhibit damped oscillations about finite central values as the central energy density tends to infinity. The Kretschmann invariant at the center of the boson star diverges in the limit of diverging central energy. This contrasts with the D=5 case, where the Kretschmann invariant diverges at a finite value of the central energy density. Solutions with α>α_cr correspond to negative-mass boson stars, and for all dimensions the boson star mass and angular momentum decrease exponentially as the central energy density tends toward infinity, with the Kretschmann invariant diverging only in that limit. I also briefly discuss the difficulties of numerically obtaining single Killing vector hairy black hole solutions and present the explicit boundary conditions for both Einstein gravity and Einstein-Gauss-Bonnet gravity.
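For a horizonless rotating boson star with a single helical Killing vector, the first law referred to above takes a particularly simple form; the following statement is the standard one in the single-Killing-vector boson star literature (with ω the angular velocity), given here for orientation rather than quoted from the thesis:

    % Helical Killing vector: \xi = \partial_t + \omega \,\partial_\phi
    % First law for the horizonless soliton (no entropy term):
    \[
      dM = \omega \, dJ
    \]

Checking this relation numerically along the one-parameter family in q_0 is what the quoted accuracy of 1 part in 10^6 refers to.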
213

Improved critical values for extreme normalized and studentized residuals in Gauss-Markov models

Lehmann, Rüdiger 06 August 2014 (has links) (PDF)
We investigate extreme studentized and normalized residuals as test statistics for outlier detection in the Gauss-Markov model, possibly not of full rank. We show how critical values (quantile values) of such test statistics are derived from the probability distribution of a single studentized or normalized residual by dividing the level of error probability by the number of residuals. This derivation neglects dependencies between the residuals. We suggest improving it by a procedure based on the Monte Carlo method for the numerical computation of such critical values up to arbitrary precision. Results for free leveling networks reveal significant differences from the values used so far. We also show how to compute those critical values for non-normal error distributions. The results prove that the critical values are very sensitive to the type of error distribution.
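The suggested Monte Carlo procedure is conceptually simple: simulate errors under the null model, project them into residual space, record the extreme normalized residual, and read off the empirical quantile. The Python sketch below illustrates this for a full-rank design matrix A and level alpha, both assumed for illustration; the paper's rank-deficient networks and non-normal distributions are not reproduced.

    import numpy as np

    def mc_critical_value(A, sigma=1.0, alpha=0.05, n_sim=100_000, seed=None):
        # Monte Carlo critical value of the maximum normalized residual
        # in y = A x + e with e ~ N(0, sigma^2 I). Illustrative sketch.
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        P = np.eye(n) - A @ np.linalg.pinv(A)   # residual projector
        q = np.diag(P)                          # redundancy numbers
        stats = np.empty(n_sim)
        for k in range(n_sim):
            e = rng.normal(0.0, sigma, size=n)
            r = P @ e                           # residuals under H0
            w = np.abs(r) / (sigma * np.sqrt(np.maximum(q, 1e-12)))
            stats[k] = w.max()                  # extreme normalized residual
        # (1 - alpha) quantile of the max statistic; dependencies between
        # residuals are accounted for, unlike the division-by-n shortcut.
        return np.quantile(stats, 1.0 - alpha)

Unlike dividing the error probability by the number of residuals, the simulated quantile reflects the joint distribution of the residuals, which is exactly the improvement advocated above.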
214

Spectral Element Method for Pricing European Options and Their Greeks

Yue, Tianyao January 2012 (has links)
Numerical methods such as the Monte Carlo method (MCM), the finite difference method (FDM) and the finite element method (FEM) have been successfully implemented to solve financial partial differential equations (PDEs). Sophisticated computational algorithms are strongly desired to further improve accuracy and efficiency.

The relatively new spectral element method (SEM) combines the exponential convergence of spectral methods and the geometric flexibility of the FEM. This dissertation carefully investigates the SEM for the pricing of European options and their Greeks (Delta, Gamma and Theta). The essential techniques, Gauss quadrature rules, are thoroughly discussed and developed. The spectral element method and its error analysis are briefly introduced first and expanded in detail afterwards.

The multi-element spectral element method (ME-SEM) for the Black-Scholes PDE is derived for European put options with and without dividend and for a condor option with a more complicated payoff. Under the same Crank-Nicolson approach for the time integration, the SEM shows a significant accuracy increase and time cost reduction over the FDM. A novel discontinuous payoff spectral element method (DP-SEM) is invented and numerically validated on a European binary put option. The SEM is also applied to the constant elasticity of variance (CEV) model and verified against the MCM and the valuation formula. The stochastic alpha beta rho (SABR) model is solved with the multi-dimensional spectral element method (MD-SEM) for a European put option. Error convergence for option prices and Greeks with respect to the number of grid points and the time step is analyzed and illustrated. / Dissertation
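As a flavor of the Gauss quadrature machinery the dissertation builds on, the sketch below prices a European put as a discounted lognormal expectation on Gauss-Legendre nodes. This is plain quadrature of the risk-neutral integral, not the dissertation's spectral element PDE solver, and all parameter values are assumed for illustration.

    import numpy as np

    def put_price_gauss_legendre(S0, K, r, sigma, T, n=64):
        # European put as e^{-rT} E[max(K - S_T, 0)] with S_T lognormal.
        # Quadrature-only illustration; not the SEM solver of the thesis.
        mu = np.log(S0) + (r - 0.5 * sigma**2) * T   # mean of log S_T
        sd = sigma * np.sqrt(T)
        a, b = mu - 8 * sd, mu + 8 * sd              # +-8 sd covers the mass
        t, w = np.polynomial.legendre.leggauss(n)    # nodes/weights on [-1,1]
        x = 0.5 * (b - a) * t + 0.5 * (b + a)        # map nodes to [a, b]
        w = 0.5 * (b - a) * w
        density = np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        payoff = np.maximum(K - np.exp(x), 0.0)
        return np.exp(-r * T) * np.sum(w * payoff * density)

    # Illustrative call (values assumed); compare with Black-Scholes:
    print(put_price_gauss_legendre(S0=100, K=100, r=0.05, sigma=0.2, T=1.0))

The kink in the payoff at log K limits the convergence of a single global rule, which is precisely what motivates element-wise treatments such as ME-SEM and DP-SEM that align element boundaries with the discontinuity.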
215

Evaluation of TDOA based Football Player’s Position Tracking Algorithm using Kalman Filter

Kanduri, Srinivasa Rangarajan Mukhesh, Medapati, Vinay Kumar Reddy January 2018 (has links)
Time Difference Of Arrival (TDOA) based position tracking is one of the pinnacles of sports tracking technology. Using radio frequency communication, advanced filtering techniques and various computation methods, the position of a moving player in a virtually created sports arena can be identified using MATLAB, and related to the player's movement in real time. For football in particular, this acts as a powerful tool for coaches to enhance team performance. Football clubs can use the player tracking data to boost their own team strengths and gain insight into their competing teams as well. This method helps to improve the success rate of athletes and clubs by analyzing the results, which helps in crafting their tactical and strategic approach to game play. The algorithm can also be used to enhance the viewing experience of the audience in the stadium, as well as the broadcast. In this thesis work, a typical football field scenario is assumed and an array of base stations (BS) is installed equidistantly along the perimeter of the field. The player carries a radio transmitter which emits a radio frequency signal throughout the assigned game time. Using the concept of TDOA, position estimates of the player are generated and the transmitter is tracked continuously by the BS. The position estimates are then fed to the Kalman filter, which filters and smooths the position estimates of the player between the sample points considered. Different paths of the player, such as straight-line, circular and zig-zag paths in the field, are animated and the positions of the player are tracked. Based on the error rate of the player's estimated position, the performance of the Kalman filter is evaluated. The Kalman filter's performance is analyzed by varying the number of sample points.
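The estimation loop described above has two stages: a TDOA solver produces noisy position fixes, and a Kalman filter smooths them. The Python sketch below implements only the smoothing stage with a constant-velocity model on assumed 2-D position fixes; the multilateration step, field geometry, and all noise parameters are placeholders rather than the thesis's MATLAB setup.

    import numpy as np

    dt = 0.1                       # assumed sampling interval [s]
    F = np.block([[np.eye(2), dt * np.eye(2)],
                  [np.zeros((2, 2)), np.eye(2)]])   # constant-velocity model
    H = np.hstack([np.eye(2), np.zeros((2, 2))])    # observe position only
    Q = 0.05 * np.eye(4)           # process noise (placeholder value)
    R = 4.0 * np.eye(2)            # TDOA fix noise (placeholder value)

    def track(tdoa_fixes):
        # Smooth a sequence of noisy 2-D TDOA position fixes.
        x = np.zeros(4)            # state [px, py, vx, vy]
        P = 100.0 * np.eye(4)      # large initial uncertainty
        out = []
        for z in tdoa_fixes:
            x = F @ x              # predict
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R    # update with the TDOA fix
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(4) - K @ H) @ P
            out.append(x[:2].copy())
        return np.array(out)

    # Example: a noisy straight-line run, one of the paths the thesis animates.
    truth = np.stack([np.linspace(0, 50, 200), np.linspace(0, 30, 200)], axis=1)
    fixes = truth + np.random.default_rng(0).normal(0, 2.0, truth.shape)
    smoothed = track(fixes)

Comparing `smoothed` against `truth` for different path shapes and sample counts mirrors the error-rate evaluation described in the abstract.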
216

Supersymmetric extensions of the structural equations of supermanifolds embedded in superspaces

Bertrand, Sébastien 06 1900 (has links)
No description available.
217

Electrical impedance tomography algorithm using linear programming as the image search method

Miguel Fernando Montoya Vallejo 14 November 2007 (has links)
Electrical impedance tomography (EIT) generates images of the resistivity distribution of a domain. EIT injects currents through electrodes placed on the boundary of the domain and measures electric potentials through the same electrodes. EIT is considered an inverse problem, non-linear and ill-posed. There are two classes of algorithms to estimate the resistivity distribution inside the domain: difference-image algorithms, which estimate resistivity distribution variations, and absolute-image algorithms, which estimate the resistivity distribution itself. Resistivity distribution variations are the solution of a linear system of the form Ax = b. The main objective of this work is to evaluate the performance of linear programming (LP) in solving an EIT linear system, from the point of view of numerical error propagation and of the ability to constrain the solution space. The impact of using LP to solve an EIT linear system is evaluated both on a difference-image algorithm (the sensitivity matrix method) and on an absolute algorithm (Gauss-Newton). This work shows that the use of LP reduces the propagated numerical error compared to LU decomposition. It is also shown that constraining the solution space directly through LP constraints improves the resistivity resolution and the spatial resolution of the images when compared to LU decomposition.
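Solving Ax = b by linear programming typically means minimizing the L1 residual ||Ax - b||_1 with slack variables, which also makes it natural to impose box constraints on x (e.g., bounds on resistivity variations). The SciPy sketch below shows this standard reformulation; it is a generic illustration under assumed bounds, not the thesis's EIT code.

    import numpy as np
    from scipy.optimize import linprog

    def solve_l1_lp(A, b, lo=-1.0, hi=1.0):
        # Solve min ||A x - b||_1 subject to lo <= x <= hi via LP.
        # Standard reformulation: -t <= A x - b <= t, minimize sum(t).
        m, n = A.shape
        c = np.concatenate([np.zeros(n), np.ones(m)])   # vars: [x, t]
        A_ub = np.block([[A, -np.eye(m)],               #  A x - t <= b
                         [-A, -np.eye(m)]])             # -A x - t <= -b
        b_ub = np.concatenate([b, -b])
        bounds = [(lo, hi)] * n + [(0, None)] * m
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        return res.x[:n]

    # Tiny illustration with a near-singular system (values assumed):
    rng = np.random.default_rng(1)
    A = rng.normal(size=(30, 10)); A[:, -1] = A[:, 0] + 1e-6
    x_true = rng.uniform(-0.5, 0.5, 10)
    b = A @ x_true + rng.normal(0, 1e-3, 30)
    x_est = solve_l1_lp(A, b)

The box constraints are where the solution-space restriction discussed above enters: physically implausible resistivity variations are excluded at the solver level rather than filtered afterwards.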
218

Random Variate Generation by Numerical Inversion When Only the Density Is Known

Derflinger, Gerhard, Hörmann, Wolfgang, Leydold, Josef January 2009 (has links) (PDF)
We present a numerical inversion method for generating random variates from continuous distributions when only the density function is given. The algorithm is based on polynomial interpolation of the inverse CDF and Gauss-Lobatto integration. The user can select the required precision which may be close to machine precision for smooth, bounded densities; the necessary tables have moderate size. Our computational experiments with the classical standard distributions (normal, beta, gamma, t-distributions) and with the noncentral chi-square, hyperbolic, generalized hyperbolic and stable distributions showed that our algorithm always reaches the required precision. The setup time is moderate and the marginal execution time is very fast and nearly the same for all distributions. Thus for the case that large samples with fixed parameters are required the proposed algorithm is the fastest inversion method known. Speed-up factors up to 1000 are obtained when compared to inversion algorithms developed for the specific distributions. This makes our algorithm especially attractive for the simulation of copulas and for quasi-Monte Carlo applications. This paper is the revised final version of the working paper no. 78 of this research report series. / Series: Research Report Series / Department of Statistics and Mathematics
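The method's outline, building a numerical inverse CDF from the density alone and evaluating it at uniform variates, can be imitated crudely in a few lines. The sketch below uses cumulative-trapezoid integration and linear interpolation in place of the paper's Gauss-Lobatto integration and polynomial interpolation, so it illustrates the inversion idea only, at far lower accuracy than the actual algorithm.

    import numpy as np

    def make_inverse_cdf(pdf, lo, hi, n=4001):
        # Tabulate the CDF of `pdf` on [lo, hi] by trapezoid integration
        # and return an approximate inverse CDF. Crude stand-in for the
        # paper's Gauss-Lobatto / polynomial-interpolation construction.
        x = np.linspace(lo, hi, n)
        f = pdf(x)
        cdf = np.concatenate([[0.0],
                              np.cumsum((f[1:] + f[:-1]) * np.diff(x) / 2)])
        cdf /= cdf[-1]                   # normalize; unnormalized pdf is fine
        return lambda u: np.interp(u, cdf, x)

    # Example: sample from a half-normal law given only its density formula.
    pdf = lambda x: np.exp(-0.5 * x**2)  # normalizing constant not needed
    inv_cdf = make_inverse_cdf(pdf, 0.0, 8.0)
    u = np.random.default_rng(0).uniform(size=100_000)
    samples = inv_cdf(u)                 # monotone in u, hence QMC-friendly

Because the transform is monotone in u, the same construction serves the copula and quasi-Monte Carlo uses mentioned above; the paper's contribution is driving the approximation error down to a user-selected, near-machine precision.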
219

Online Supplement to "Random Variate Generation by Numerical Inversion When Only the Density Is Known"

Derflinger, Gerhard, Hörmann, Wolfgang, Leydold, Josef January 2009 (has links) (PDF)
This Online Supplement summarizes our computational experiences with Algorithm NINIGL presented in our paper "Random Variate Generation by Numerical Inversion when only the Density Is Known" (Report No. 90). It is a numerical inversion method for generating random variates from continuous distributions when only the density function is given. The algorithm is based on polynomial interpolation of the inverse CDF and Gauss-Lobatto integration. The user can select the required precision which may be close to machine precision for smooth, bounded densities; the necessary tables have moderate size. Our computational experiments with the classical standard distributions (normal, beta, gamma, t-distributions) and with the noncentral chi-square, hyperbolic, generalized hyperbolic and stable distributions showed that our algorithm always reaches the required precision. The setup time is moderate and the marginal execution time is very fast and nearly the same for all these distributions. Thus for the case that large samples with fixed parameters are required the proposed algorithm is the fastest inversion method known. Speed-up factors up to 1000 are obtained when compared to inversion algorithms developed for the specific distributions. Thus our algorithm is especially attractive for the simulation of copulas and for quasi-Monte Carlo applications. / Series: Research Report Series / Department of Statistics and Mathematics
220

Discrepancy-based algorithms for best-subset model selection

Zhang, Tao 01 May 2013 (has links)
The selection of a best-subset regression model from a candidate family is a common problem that arises in many analyses. In best-subset model selection, we consider all possible subsets of regressor variables; thus, numerous candidate models may need to be fit and compared. One of the main challenges of best-subset selection arises from the size of the candidate model family: specifically, the probability of selecting an inappropriate model generally increases as the size of the family increases. For this reason, it is usually difficult to select an optimal model when best-subset selection is attempted based on a moderate to large number of regressor variables. Model selection criteria are often constructed to estimate discrepancy measures used to assess the disparity between each fitted candidate model and the generating model. The Akaike information criterion (AIC) and the corrected AIC (AICc) are designed to estimate the expected Kullback-Leibler (K-L) discrepancy. For best-subset selection, both AIC and AICc are negatively biased, and the use of either criterion will lead to overfitted models. To correct for this bias, we introduce a criterion AICi, which has a penalty term evaluated from Monte Carlo simulation. A multistage model selection procedure AICaps, which utilizes AICi, is proposed for best-subset selection. In the framework of linear regression models, the Gauss discrepancy is another frequently applied measure of proximity between a fitted candidate model and the generating model. Mallows' conceptual predictive statistic (Cp) and the modified Cp (MCp) are designed to estimate the expected Gauss discrepancy. For best-subset selection, Cp and MCp exhibit negative estimation bias. To correct for this bias, we propose a criterion CPSi that again employs a penalty term evaluated from Monte Carlo simulation. We further devise a multistage procedure, CPSaps, which selectively utilizes CPSi. In this thesis, we consider best-subset selection in two different modeling frameworks: linear models and generalized linear models. Extensive simulation studies are compiled to compare the selection behavior of our methods and other traditional model selection criteria. We also apply our methods to a model selection problem in a study of bipolar disorder.
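For orientation, the baseline criteria discussed above are easy to state in code: for a Gaussian linear model with k estimated parameters fit to n observations, AIC = n log(RSS/n) + 2k and AICc adds the small-sample correction 2k(k+1)/(n-k-1). The sketch below runs an exhaustive best-subset search scored by AICc; the simulated data and names are illustrative, and the thesis's Monte-Carlo-penalized criteria (AICi, CPSi) are not reproduced.

    import itertools
    import numpy as np

    def aicc_best_subset(X, y):
        # Exhaustive best-subset search scored by AICc (Gaussian likelihood).
        # Illustrates the baseline the thesis improves on, not AICi/CPSi.
        n, p = X.shape
        best = (np.inf, ())
        for size in range(1, p + 1):
            for subset in itertools.combinations(range(p), size):
                Xs = np.column_stack([np.ones(n), X[:, subset]])
                beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
                rss = np.sum((y - Xs @ beta) ** 2)
                k = Xs.shape[1] + 1      # coefficients + error variance
                aic = n * np.log(rss / n) + 2 * k
                aicc = aic + 2 * k * (k + 1) / (n - k - 1)
                best = min(best, (aicc, subset))
        return best

    # Illustrative run: 8 candidate regressors, only the first 3 matter.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 8))
    y = X[:, :3] @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 1, 60)
    score, chosen = aicc_best_subset(X, y)

Because the candidate family grows as 2^p, the selection bias described above grows with p as well, which is exactly where the simulation-based penalty terms of AICi and CPSi are designed to help.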
