  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

On the distribution of polynomials having a given number of irreducible factors over finite fields

Datta, Arghya 08 1900 (has links)
Soit q ⩾ 2 une puissance première fixe. L’objectif principal de cette thèse est d’étudier le comportement asymptotique de la fonction arithmétique Π_q(n,k) comptant le nombre de polynômes moniques de degré n et ayant exactement k facteurs irréductibles (avec multiplicité) sur le corps fini F_q. Warlimont et Car ont montré que Π_q(n,k) suit approximativement une loi de Poisson lorsque 1 ⩽ k ⩽ A log n pour une constante A > 0. Plus tard, Hwang a étudié la fonction Π_q(n,k) pour la gamme complète 1 ⩽ k ⩽ n. Nous allons d’abord démontrer une formule asymptotique pour Π_q(n,k) en utilisant une technique analytique classique développée par Sathe et Selberg. Nous reproduirons ensuite une version simplifiée du résultat de Hwang en utilisant la formule de Sathe-Selberg dans le cadre des corps de fonctions. Nous comparons également nos résultats avec les résultats analogues existants dans le cas des entiers, où l’on étudie tous les entiers naturels jusqu’à x ayant exactement k facteurs premiers. En particulier, nous montrons que le nombre de polynômes moniques croît à un taux étonnamment plus élevé, lorsque k est un peu plus grand que log n, que ce que l’on pourrait supposer en examinant le cas des entiers. Pour présenter le travail ci-dessus, nous commençons par la théorie analytique des nombres élémentaire dans le contexte des polynômes. Nous introduisons ensuite les fonctions arithmétiques clés qui jouent un rôle majeur dans notre thèse et discutons brièvement des résultats bien connus concernant leur distribution d’un point de vue probabiliste. Enfin, pour comprendre les résultats clés, nous donnons une discussion assez détaillée de l’analogue, en corps de fonctions, de la formule de Sathe-Selberg, un outil récemment développé par Porritt, puis nous utilisons cet outil pour démontrer les résultats annoncés. / Let q ⩾ 2 be a fixed prime power. 
The main objective of this thesis is to study the asymptotic behaviour of the arithmetic function Π_q(n,k), which counts the monic polynomials of degree n with exactly k irreducible factors (with multiplicity) over the finite field F_q. Warlimont and Car showed that Π_q(n,k) is approximately Poisson distributed when 1 ⩽ k ⩽ A log n for some constant A > 0. Later, Hwang studied the function Π_q(n,k) for the full range 1 ⩽ k ⩽ n. We first prove an asymptotic formula for Π_q(n,k) using a classical analytic technique developed by Sathe and Selberg. We then reproduce a simplified version of Hwang’s result using the function field analogue of the Sathe-Selberg formula. We also compare our results with the analogous existing ones in the integer case, where one studies all natural numbers up to x with exactly k prime factors. In particular, we show that when k is a little larger than log n, the number of monic polynomials grows at a surprisingly faster rate than the integer case would suggest. To present the above work, we first cover basic analytic number theory in the context of polynomials. We then introduce the key arithmetic functions that play a major role in this thesis and briefly discuss well-known results concerning their distribution from a probabilistic point of view. Finally, to understand the key results, we give a fairly detailed discussion of the function field analogue of the Sathe-Selberg formula, a tool recently developed by Porritt, and then use this tool to prove the claimed results.
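The counting function Π_q(n,k) described in this abstract can be checked by brute force for tiny parameters. The sketch below is illustrative only and is not from the thesis: it encodes polynomials over F_2 as bitmasks, sieves the monic irreducibles by trial division, and counts the monic degree-n polynomials with exactly k irreducible factors counted with multiplicity (the function names are our own).

```python
def pdivmod(a, b):
    """Polynomial long division over F_2 on bitmask-encoded polynomials:
    bit i of the mask is the coefficient of x^i.  Returns (quotient, remainder)."""
    db = b.bit_length() - 1
    q = 0
    while a and a.bit_length() - 1 >= db:
        shift = a.bit_length() - 1 - db
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def irreducibles_up_to(n):
    """Monic irreducibles over F_2 of degree 1..n, sieved by trial division:
    a reducible polynomial of degree d has an irreducible factor of degree <= d//2."""
    irr = []
    for d in range(1, n + 1):
        for p in range(1 << d, 1 << (d + 1)):  # all monic polynomials of degree d
            if all(pdivmod(p, q)[1] != 0
                   for q in irr if q.bit_length() - 1 <= d // 2):
                irr.append(p)
    return irr

def big_omega(f, irr):
    """Number of irreducible factors of f, counted with multiplicity."""
    count = 0
    for p in irr:
        while True:
            quo, rem = pdivmod(f, p)
            if rem:
                break
            f, count = quo, count + 1
    return count

def Pi(n, k):
    """Pi_2(n, k): monic degree-n polynomials over F_2 with exactly
    k irreducible factors (with multiplicity)."""
    irr = irreducibles_up_to(n)
    return sum(1 for f in range(1 << n, 1 << (n + 1))
               if big_omega(f, irr) == k)
```

Summing Pi(n, k) over 1 ⩽ k ⩽ n recovers the total count q^n of monic degree-n polynomials, a quick sanity check on the enumeration.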
142

Análise de carteiras em tempo discreto / Discrete time portfolio analysis

Kato, Fernando Hideki 14 April 2004 (has links)
Nesta dissertação, o modelo de seleção de carteiras de Markowitz será estendido com uma análise em tempo discreto e hipóteses mais realísticas. Um produto tensorial finito de densidades Erlang será usado para aproximar a densidade de probabilidade multivariada dos retornos discretos uniperiódicos de ativos dependentes. A Erlang é um caso particular da distribuição Gama. Uma mistura finita pode gerar densidades multimodais não-simétricas e o produto tensorial generaliza este conceito para dimensões maiores. Assumindo que a densidade multivariada foi independente e identicamente distribuída (i.i.d.) no passado, a aproximação pode ser calibrada com dados históricos usando o critério da máxima verossimilhança. Este é um problema de otimização em larga escala, mas com uma estrutura especial. Assumindo que esta densidade multivariada será i.i.d. no futuro, então a densidade dos retornos discretos de uma carteira de ativos com pesos não-negativos será uma mistura finita de densidades Erlang. O risco será calculado com a medida Downside Risk, que é convexa para determinados parâmetros, não é baseada em quantis, não causa a subestimação do risco e torna os problemas de otimização uni e multiperiódico convexos. O retorno discreto é uma variável aleatória multiplicativa ao longo do tempo. A distribuição multiperiódica dos retornos discretos de uma seqüência de T carteiras será uma mistura finita de distribuições Meijer G. Após uma mudança na medida de probabilidade para a composta média, é possível calcular o risco e o retorno, que levará à fronteira eficiente multiperiódica, na qual cada ponto representa uma ou mais seqüências ordenadas de T carteiras. As carteiras de cada seqüência devem ser calculadas do futuro para o presente, mantendo o retorno esperado no nível desejado, o qual pode ser função do tempo. Uma estratégia de alocação dinâmica de ativos é refazer os cálculos a cada período, usando as novas informações disponíveis. 
Se o horizonte de tempo tender a infinito, então a fronteira eficiente, na medida de probabilidade composta média, tenderá a um único ponto, dado pela carteira de Kelly, qualquer que seja a medida de risco. Para selecionar um dentre vários modelos de otimização de carteira, é necessário comparar seus desempenhos relativos. A fronteira eficiente de cada modelo deve ser traçada em seu respectivo gráfico. Como os pesos dos ativos das carteiras sobre estas curvas são conhecidos, é possível traçar todas as curvas em um mesmo gráfico. Para um dado retorno esperado, as carteiras eficientes dos modelos podem ser calculadas, e os retornos realizados e suas diferenças ao longo de um backtest podem ser comparados. / In this thesis, Markowitz’s portfolio selection model will be extended by means of a discrete time analysis and more realistic hypotheses. A finite tensor product of Erlang densities will be used to approximate the multivariate probability density function of the single-period discrete returns of dependent assets. The Erlang distribution is a particular case of the Gamma distribution. A finite mixture can generate multimodal asymmetric densities, and the tensor product generalizes this concept to higher dimensions. Assuming that the multivariate density was independent and identically distributed (i.i.d.) in the past, the approximation can be calibrated with historical data using the maximum likelihood criterion. This is a large-scale optimization problem, but one with a special structure. Assuming that this multivariate density will be i.i.d. in the future, the density of the discrete returns of a portfolio of assets with nonnegative weights will be a finite mixture of Erlang densities. The risk will be calculated with the Downside Risk measure, which is convex for certain parameters, is not based on quantiles, does not underestimate risk, and makes the single-period and multiperiod optimization problems convex. 
The discrete return is a multiplicative random variable over time. The multiperiod distribution of the discrete returns of a sequence of T portfolios will be a finite mixture of Meijer G distributions. After a change of the probability measure to the average compound measure, it is possible to calculate the risk and the return, which leads to the multiperiod efficient frontier, where each point represents one or more ordered sequences of T portfolios. The portfolios of each sequence must be calculated from the future to the present, keeping the expected return at the desired level, which can be a function of time. A dynamic asset allocation strategy is to redo the calculations at each period, using the newly available information. If the time horizon tends to infinity, then the efficient frontier, in the average compound probability measure, will tend to a single point, given by the Kelly portfolio, whatever the risk measure. To select one among several portfolio optimization models, it is necessary to compare their relative performances. The efficient frontier of each model must be plotted in its respective graph. As the weights of the portfolio assets on these curves are known, it is possible to plot all the curves in the same graph. For a given expected return, the efficient portfolios of the models can be calculated, and the realized returns and their differences along a backtest can be compared.
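As a loose numerical companion to the Erlang-mixture construction above, the sketch below samples single-period gross returns from a finite mixture of Erlang (integer-shape Gamma) densities and estimates a lower-partial-moment downside risk. The mixture parameters and helper names are our own illustrative choices, not the calibrated model of the thesis.

```python
def erlang_mixture_sample(weights, shapes, rates, rng):
    """Draw one gross return from a finite Erlang mixture; weights must sum to 1.
    An Erlang(k, lam) density is a Gamma with integer shape k and rate lam."""
    u = rng.random()
    acc = 0.0
    for w, k, lam in zip(weights, shapes, rates):
        acc += w
        if u <= acc:
            return rng.gammavariate(k, 1.0 / lam)  # rate lam -> scale 1/lam
    return rng.gammavariate(shapes[-1], 1.0 / rates[-1])

def mixture_mean(weights, shapes, rates):
    """Exact mean of the Erlang mixture: sum_m w_m * k_m / lam_m."""
    return sum(w * k / lam for w, k, lam in zip(weights, shapes, rates))

def downside_risk(returns, target, order=2):
    """Lower partial moment E[max(target - R, 0)^order], estimated from samples."""
    return sum(max(target - r, 0.0) ** order for r in returns) / len(returns)
```

The lower partial moment of order 2 is one common convex downside-risk measure; the specific measure used in the thesis may differ in its parameters.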
144

Etude d'équations aux dérivées partielles stochastiques / Study on stochastic partial differential equations

Bauzet, Caroline 26 June 2013 (has links)
Cette thèse s’inscrit dans le domaine mathématique de l’analyse des équations aux dérivées partielles (EDP) non-linéaires stochastiques. Nous nous intéressons à des EDP paraboliques et hyperboliques que l’on perturbe stochastiquement au sens d’Itô. Il s’agit d’introduire l’aléatoire via l’ajout d’une intégrale stochastique (intégrale d’Itô) qui peut dépendre ou non de la solution, on parle alors de bruit multiplicatif ou additif. La présence de la variable de probabilité ne nous permet pas d’utiliser tous les outils classiques de l’analyse des EDP. Notre but est d’adapter les techniques connues dans le cadre déterministe aux EDP non linéaires stochastiques en proposant des méthodes alternatives. Les résultats obtenus sont décrits dans les cinq chapitres de cette thèse : Dans le Chapitre I, nous étudions une perturbation stochastique des équations de Barenblatt. En utilisant une semi-discrétisation implicite en temps, nous établissons l’existence et l’unicité d’une solution dans le cas additif, et grâce aux propriétés de la solution nous sommes en mesure d’étendre ce résultat au cas multiplicatif à l’aide d’un théorème de point fixe. Dans le Chapitre II, nous considérons une classe d’équations de type Barenblatt stochastiques dans un cadre abstrait. Il s’agit là d’une généralisation des résultats du Chapitre I. Dans le Chapitre III, nous étudions le problème de Cauchy pour une loi de conservation stochastique. Nous montrons l’existence d’une solution par une méthode de viscosité artificielle en utilisant des arguments de compacité donnés par la théorie des mesures de Young. L’unicité repose sur une adaptation de la méthode de dédoublement des variables de Kruzhkov. Dans le Chapitre IV, nous nous intéressons au problème de Dirichlet pour la loi de conservation stochastique étudiée au Chapitre III. Le point remarquable de l’étude repose sur l’utilisation des semi-entropies de Kruzhkov pour montrer l’unicité. 
Dans le Chapitre V, nous introduisons une méthode de splitting pour proposer une approche numérique du problème étudié au Chapitre IV, suivie de quelques simulations de l’équation de Burgers stochastique dans le cas unidimensionnel. / This thesis falls within the mathematical analysis of nonlinear stochastic partial differential equations (PDEs). We are interested in parabolic and hyperbolic PDEs perturbed stochastically in the Itô sense. Randomness is introduced by adding a stochastic integral (an Itô integral), which may or may not depend on the solution; one then speaks of multiplicative or additive noise. The presence of the random variable does not allow us to systematically apply the classical tools of PDE analysis. Our aim is to adapt techniques known in the deterministic setting to nonlinear stochastic PDEs by proposing alternative methods. The results obtained are described in the five chapters of this thesis. In Chapter I, we investigate a stochastic perturbation of the Barenblatt equations. Using an implicit time discretization, we establish the existence and uniqueness of the solution in the additive case; thanks to the properties of this solution, we are able to extend the result to the multiplicative case using a fixed-point theorem. In Chapter II, we consider a class of stochastic equations of Barenblatt type in an abstract framework, generalizing the results of Chapter I. In Chapter III, we study the Cauchy problem for a stochastic conservation law. We show the existence of a solution via an artificial viscosity method, with compactness arguments based on Young measure theory. Uniqueness is proved by an adaptation of the Kruzhkov doubling-of-variables technique. In Chapter IV, we are interested in the Dirichlet problem for the stochastic conservation law studied in Chapter III. The remarkable point is the use of Kruzhkov semi-entropies to show the uniqueness of the solution. 
In Chapter V, we introduce a splitting method to propose a numerical approach to the problem studied in Chapter IV, and we conclude with some simulations of the stochastic Burgers equation in the one-dimensional case.
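A minimal illustration of the splitting idea mentioned for Chapter V — alternating a deterministic upwind Burgers sweep with an additive Itô noise increment — might look as follows. The grid, noise model, and parameter values are our own assumptions for the sketch, not those analysed in the thesis.

```python
import math

def splitting_step(u, dx, dt, sigma, rng):
    """One Lie splitting step for a stochastic Burgers equation
    du + d(u^2/2)/dx dt = sigma dW:
    (1) a deterministic upwind finite-volume sweep (valid for u >= 0,
        periodic boundary, CFL condition dt * max|u| / dx < 1),
    (2) an additive Ito increment scaled by sqrt(dt)."""
    n = len(u)
    flux = [0.5 * v * v for v in u]
    u_det = [u[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]
    return [v + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0) for v in u_det]
```

Iterating this step advances the solution in time; a convergence study would of course need finer grids and a careful treatment of the noise, which is beyond this sketch.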
145

Compression et inférence des opérateurs intégraux : applications à la restauration d’images dégradées par des flous variables / Approximation and estimation of integral operators : applications to the restoration of images degraded by spatially varying blurs

Escande, Paul 26 September 2016 (has links)
Le problème de restauration d'images dégradées par des flous variables connaît un attrait croissant et touche plusieurs domaines tels que l'astronomie, la vision par ordinateur et la microscopie à feuille de lumière où les images sont de taille un milliard de pixels. Les flous variables peuvent être modélisés par des opérateurs intégraux qui associent à une image nette u, une image floue Hu. Une fois discrétisé pour être appliqué sur des images de N pixels, l'opérateur H peut être vu comme une matrice de taille N x N. Pour les applications visées, le stockage de la matrice en mémoire nécessiterait un exaoctet. On voit apparaître ici les difficultés liées à ce problème de restauration des images qui sont i) le stockage de ce grand volume de données, ii) les coûts de calculs prohibitifs des produits matrice-vecteur. Ce problème souffre du fléau de la dimension. D'autre part, dans beaucoup d'applications, l'opérateur de flou n'est pas ou n'est que partiellement connu. Il y a donc deux problèmes complémentaires mais étroitement liés qui sont l'approximation et l'estimation des opérateurs de flou. Cette thèse a consisté à développer des nouveaux modèles et méthodes numériques permettant de traiter ces problèmes. / The restoration of images degraded by spatially varying blurs is a problem of increasing importance. It is encountered in many applications such as astronomy, computer vision and fluorescence microscopy, where images can be of size one billion pixels. Variable blurs can be modelled by linear integral operators H that map a sharp image u to its blurred version Hu. After discretization of the image on a grid of N pixels, H can be viewed as a matrix of size N x N. For the targeted applications, storing this matrix would require an exabyte of memory. This simple observation illustrates the difficulties associated with this problem: i) the storage of a huge amount of data, ii) the prohibitive computational cost of matrix-vector products. 
This problem suffers from the curse of dimensionality. In addition, in many applications, the blur operator is unknown or only partially known. There are therefore two complementary but closely related problems: the approximation and the estimation of blurring operators. Most of the work of this thesis is dedicated to developing new models and computational methods to address these issues.
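One standard way to sidestep the N x N storage problem described above — in the spirit of the approximation problem, though not necessarily the method developed in the thesis — is to represent a spatially varying blur as a weighted sum of a few ordinary convolutions, (Hu)(x) = Σ_m w_m(x)·(h_m * u)(x): storage drops from N² matrix entries to M kernels plus M weight maps. A one-dimensional sketch with our own helper names:

```python
def convolve(signal, kernel):
    """Zero-padded 1-D convolution, output the same length as the input."""
    n, m = len(signal), len(kernel)
    half = m // 2
    out = []
    for i in range(n):
        s = 0.0
        for j in range(m):
            k = i + j - half
            if 0 <= k < n:
                s += signal[k] * kernel[j]
        out.append(s)
    return out

def varying_blur(signal, psfs, weights):
    """Apply a spatially varying blur as a weighted sum of convolutions:
    out[i] = sum_m weights[m][i] * (psfs[m] * signal)[i].
    Only M kernels and M weight maps are stored, never an N x N matrix."""
    blurred = [convolve(signal, h) for h in psfs]
    return [sum(w[i] * b[i] for w, b in zip(weights, blurred))
            for i in range(len(signal))]
```

With a single identity kernel and unit weights the operator reduces to the identity, which makes the structure easy to sanity-check.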
146

有序分類下三維列聯表之關係模型探討 / On Association Models for Three-Way Contingency Tables with Ordinal Categories

劉佳鑫, Benny Liu, Chia-Hsin Unknown Date (has links)
本文主要是在探討三個變數所構成之三維列聯表中，兩兩有序類別變數間的關係，而衡量的標準，我們則採用「兩兩變數所構成之二維列聯表中，相鄰兩列與相鄰兩行所求計出的相對成敗比(local odds ratios)」。在三維列聯表的資料架構下，我們可分別就固定某一變數水準之下兩個有序變數彼此間的「條件關係」，以及三個有序類別變數彼此兩兩間的「部分關係」，建構其各自的三維關係模型，並進行參數估計。此外，我們也提供必要的電腦程式，並舉出實例，加以說明。 / In analyzing a three-way contingency table with three ordinal variables, we can use the association models suggested in Goodman (1979) to study the association between each pair of ordinal variables. The association is measured in terms of the local odds ratios formed from adjacent rows and adjacent columns of the cross-classification. This article investigates in great detail the conditional association models and the partial association models for three-way cross-classifications. In addition, issues in estimating the parameters of these two kinds of association models are discussed, and computer programs are provided. Some applications are illustrated with examples.
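The local odds ratios used above as the measure of association are straightforward to compute: under independence they all equal 1, which is the baseline that association models deform. A small sketch (our own helper, not the computer programs provided with the article):

```python
def local_odds_ratios(table):
    """Local odds ratios of an I x J contingency table, formed from
    adjacent rows and adjacent columns:
    theta[i][j] = (n[i][j] * n[i+1][j+1]) / (n[i][j+1] * n[i+1][j]).
    Returns an (I-1) x (J-1) array of ratios."""
    I, J = len(table), len(table[0])
    return [[(table[i][j] * table[i + 1][j + 1])
             / (table[i][j + 1] * table[i + 1][j])
             for j in range(J - 1)]
            for i in range(I - 1)]
```

For a table concentrated on the diagonal, all local odds ratios exceed 1, reflecting a positive association between the two ordinal variables.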
147

Circulation submésoéchelle et comportements des prédateurs marins supérieurs : Apport de l'analyse multi-échelles et multi-capteurs / Submesoscale circulation and the behaviour of marine top predators: contributions of multi-scale, multi-sensor analysis

Sudre, J. 20 December 2013 (has links) (PDF)
The ocean is the seat of complex motions at all spatial and temporal scales. Within the mean global circulation there exists a secondary circulation populated by fronts, meanders, narrow jets and eddies, known as the mesoscale circulation. Satellite observation allows a synoptic description and assessment of this mesoscale dynamics by means of altimetry and scatterometry. This assessment was the first objective of this thesis and led to the development of a product distributed to the international scientific community: the GEKCO product. However, describing submesoscale processes at finer resolution requires super-resolution data (ocean colour, sea surface temperature), which can represent the full complexity of an ocean in a regime of fully developed turbulence. A method at the crossroads of physical oceanography and "complexity science", using the microcanonical formulation of the multiplicative cascade, the GEKCO product and sea surface temperature images, is the subject of the second part of this manuscript. Since ocean dynamics is the keystone of the entire marine living world, the last part of this thesis examines the impact of the mesoscale and submesoscale circulation on the marine trophic chain, focusing on its two extremities. The study of the submesoscale circulation shows that it plays a predominant role for marine biomass: an activating role in the open ocean and an inhibiting role in eastern boundary upwelling systems. Various studies of the trajectories of marine top predators demonstrate the need to take ocean dynamics into account when interpreting their navigation behaviour.
148

Beiträge zur expliziten Fehlerabschätzung im zentralen Grenzwertsatz

Paditz, Ludwig 04 June 2013 (has links) (PDF)
In der Arbeit wird das asymptotische Verhalten von geeignet normierten und zentrierten Summen von Zufallsgrößen untersucht, die entweder unabhängig sind oder im Falle der Abhängigkeit als Martingaldifferenzfolge oder stark multiplikatives System auftreten. Neben der klassischen Summationstheorie werden die Limitierungsverfahren mit einer unendlichen Summationsmatrix oder einer angepaßten Folge von Gewichtsfunktionen betrachtet. Es werden die Methode der charakteristischen Funktionen und besonders die direkte Methode der konjugierten Verteilungsfunktionen weiterentwickelt, um quantitative Aussagen über gleichmäßige und ungleichmäßige Restgliedabschätzungen im zentralen Grenzwertsatz zu beweisen. Die Untersuchungen werden dabei in der Lp-Metrik, 1 < p < ∞ oder p = 1 bzw. p = ∞, durchgeführt, wobei der Fall p = ∞ der üblichen sup-Norm entspricht. Darüber hinaus wird im Fall unabhängiger Zufallsgrößen der lokale Grenzwertsatz für Dichten betrachtet. Mittels der elektronischen Datenverarbeitung werden neue numerische Resultate erhalten. Die Arbeit wird abgerundet durch verschiedene Hinweise auf praktische Anwendungen. / This work investigates the asymptotic behavior of suitably centered and normalized sums of random variables that are either independent or, in the dependent case, form a martingale difference sequence or a strongly multiplicative system. In addition to classical summation theory, limiting methods based on an infinite summation matrix or an adapted sequence of weight functions are considered. The method of characteristic functions, and especially the direct method of conjugate distribution functions, is developed further in order to prove quantitative uniform and non-uniform estimates of the remainder term in the central limit theorem. The investigations are carried out in the Lp metric, 1 < p < ∞ as well as p = 1 and p = ∞, where the case p = ∞ corresponds to the usual sup-norm. 
In addition, in the case of independent random variables, the local limit theorem for densities is considered. New numerical results are obtained by means of electronic data processing. The work concludes with various references to practical applications.
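The uniform remainder estimates discussed above can be probed numerically. The sketch below (an illustration with our own helper names, not the thesis's method) measures the Kolmogorov distance between the empirical distribution of normalized Bernoulli sums and the standard normal CDF — the quantity that Berry-Esseen-type bounds control at rate O(n^{-1/2}).

```python
import math

def phi(x):
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def kolmogorov_distance(samples):
    """sup_x |F_emp(x) - Phi(x)|, evaluated at the sample points,
    accounting for both sides of each jump of the empirical CDF."""
    xs = sorted(samples)
    n = len(xs)
    return max(max(abs((i + 1) / n - phi(x)), abs(i / n - phi(x)))
               for i, x in enumerate(xs))

def normalized_sums(n, reps, rng):
    """Centered and normalized sums of n fair Bernoulli variables:
    (S_n - n/2) / sqrt(n/4)."""
    out = []
    for _ in range(reps):
        s = sum(rng.random() < 0.5 for _ in range(n))
        out.append((s - n * 0.5) / math.sqrt(n * 0.25))
    return out
```

For fair Bernoulli summands the third-moment ratio ρ/σ³ equals 1, so the classical uniform bound is simply C/√n; the measured distance should sit comfortably below that for moderate n, up to Monte Carlo noise.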
149

Beiträge zur expliziten Fehlerabschätzung im zentralen Grenzwertsatz

Paditz, Ludwig 27 April 1989 (has links)
In der Arbeit wird das asymptotische Verhalten von geeignet normierten und zentrierten Summen von Zufallsgrößen untersucht, die entweder unabhängig sind oder im Falle der Abhängigkeit als Martingaldifferenzfolge oder stark multiplikatives System auftreten. Neben der klassischen Summationstheorie werden die Limitierungsverfahren mit einer unendlichen Summationsmatrix oder einer angepaßten Folge von Gewichtsfunktionen betrachtet. Es werden die Methode der charakteristischen Funktionen und besonders die direkte Methode der konjugierten Verteilungsfunktionen weiterentwickelt, um quantitative Aussagen über gleichmäßige und ungleichmäßige Restgliedabschätzungen im zentralen Grenzwertsatz zu beweisen. Die Untersuchungen werden dabei in der Lp-Metrik, 1 < p < ∞ oder p = 1 bzw. p = ∞, durchgeführt, wobei der Fall p = ∞ der üblichen sup-Norm entspricht. Darüber hinaus wird im Fall unabhängiger Zufallsgrößen der lokale Grenzwertsatz für Dichten betrachtet. Mittels der elektronischen Datenverarbeitung werden neue numerische Resultate erhalten. Die Arbeit wird abgerundet durch verschiedene Hinweise auf praktische Anwendungen. / This work investigates the asymptotic behavior of suitably centered and normalized sums of random variables that are either independent or, in the dependent case, form a martingale difference sequence or a strongly multiplicative system. In addition to classical summation theory, limiting methods based on an infinite summation matrix or an adapted sequence of weight functions are considered. The method of characteristic functions, and especially the direct method of conjugate distribution functions, is developed further in order to prove quantitative uniform and non-uniform estimates of the remainder term in the central limit theorem. The investigations are carried out in the Lp metric, 1 < p < ∞ as well as p = 1 and p = ∞, where the case p = ∞ corresponds to the usual sup-norm. 
In addition, in the case of independent random variables, the local limit theorem for densities is considered. New numerical results are obtained by means of electronic data processing. The work concludes with various references to practical applications.
150

Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms

Vestin, Albin, Strandberg, Gustav January 2019 (has links)
Today, a main research field for the automotive industry is finding solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of tracking performance is often done in staged traffic scenarios, where additional sensors mounted on the vehicles are used to obtain their true positions and velocities. The difficulty of evaluating tracking performance complicates its development. An alternative approach, studied in this thesis, is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal target tracking algorithms can be obtained for all traffic scenarios without the need for extra sensors. We investigate how non-causal algorithms affect target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single-object scenarios where ground truth is available and in three multi-object scenarios without ground truth. Results from the two single-object scenarios show that tracking using only a monocular camera performs poorly, since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. 
For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
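The filtering-versus-smoothing contrast at the heart of this thesis can be sketched with a one-dimensional constant-velocity model: a causal Kalman filter followed by a non-causal Rauch-Tung-Striebel (RTS) backward pass. This is a toy illustration under our own model and noise assumptions, not the evaluation pipeline of the thesis.

```python
def kalman_rts(zs, dt=0.1, q=0.5, r=1.0):
    """Causal Kalman filter plus non-causal RTS smoother for a constant-velocity
    model with state [position, velocity] and scalar position measurements zs."""
    F = [[1.0, dt], [0.0, 1.0]]
    Q = [[q * dt ** 3 / 3, q * dt ** 2 / 2],
         [q * dt ** 2 / 2, q * dt]]

    def mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    def add(A, B):
        return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
    def tr(A):
        return [[A[j][i] for j in range(2)] for i in range(2)]
    def inv(A):
        d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

    x, P = [zs[0], 0.0], [[r, 0.0], [0.0, 10.0]]
    xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
    for z in zs:
        xp = [x[0] + dt * x[1], x[1]]                 # predict
        Pp = add(mul(mul(F, P), tr(F)), Q)
        s = Pp[0][0] + r                              # innovation variance
        K = [Pp[0][0] / s, Pp[1][0] / s]              # Kalman gain (H = [1, 0])
        x = [xp[0] + K[0] * (z - xp[0]), xp[1] + K[1] * (z - xp[0])]
        P = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
             [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
        xs_p.append(xp); Ps_p.append(Pp); xs_f.append(x); Ps_f.append(P)
    xs_s = [xs_f[-1]]                                 # backward (non-causal) pass
    for k in range(len(zs) - 2, -1, -1):
        G = mul(mul(Ps_f[k], tr(F)), inv(Ps_p[k + 1]))
        dx = [xs_s[0][0] - xs_p[k + 1][0], xs_s[0][1] - xs_p[k + 1][1]]
        xs_s.insert(0, [xs_f[k][0] + G[0][0] * dx[0] + G[0][1] * dx[1],
                        xs_f[k][1] + G[1][0] * dx[0] + G[1][1] * dx[1]])
    return xs_f, xs_s
```

Because the smoother conditions on future measurements as well as past ones, its position estimates typically track the truth more closely than the causal filter's, which is the property exploited above to generate reference data without extra sensors.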
