31

Random projectors with continuous resolutions of the identity in a finite-dimensional Hilbert space

Vourdas, Apostolos 22 October 2019 (has links)
Random sets are used to obtain a continuous partition of the cardinality of the union of many overlapping sets. The formalism uses Möbius transforms and adapts Shapley's methodology from cooperative game theory to the context of set theory. These ideas are then generalized to the context of finite-dimensional Hilbert spaces. Using random projectors onto the subspaces spanned by states from a total set, we construct an infinite number of continuous resolutions of the identity that involve Hermitian positive semi-definite operators. The simplest one is the diagonal continuous resolution of the identity; it is used to expand an arbitrary vector in terms of a continuum of components, and also to define a function on the 'probabilistic quadrant' which is analogous to the Wigner function for the harmonic oscillator on the phase-space plane. Systems with a finite-dimensional Hilbert space (which are naturally described with discrete variables) are thus described with continuous probabilistic variables. / The full text of this article will be released for public view at the end of the publisher embargo on 15 Oct 2020. / Research Development Fund Publication Prize Award winner, October 2019.
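For orientation, the familiar discrete resolution of the identity and the general shape of a continuous analogue built from positive semi-definite operators can be written as below; this is only an illustrative sketch of the general form, not the specific construction of the paper.

```latex
\[
  \sum_{n=1}^{d} \lvert n\rangle\langle n\rvert \;=\; \mathbf{1}
  \qquad\longrightarrow\qquad
  \int_{\Lambda} T(\lambda)\, d\mu(\lambda) \;=\; \mathbf{1},
  \qquad T(\lambda) \succeq 0,
\]
\[
  \lvert f\rangle \;=\; \int_{\Lambda} T(\lambda)\,\lvert f\rangle\, d\mu(\lambda)
  \quad\text{(expansion of an arbitrary vector into a continuum of components).}
\]
```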
32

Operadores integrais positivos e espaços de Hilbert de reprodução / Positive integral operators and reproducing kernel Hilbert spaces

Ferreira, José Claudinei 27 July 2010 (has links)
In this work we study theoretical properties of positive integral operators on L^2(X, u), in the case when X is a topological space, either locally compact or first countable, and u is a strictly positive measure. The analysis is directed at spectral properties of the operator which are related to some extensions of Mercer's Theorem and to the study of the reproducing kernel Hilbert spaces involved. As an application, we deduce decay rates for the eigenvalues of the operators in a special but relevant case. We also consider smoothness properties of functions in the reproducing kernel Hilbert spaces when X is a subset of Euclidean space and u is the Lebesgue measure of the space.
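As background for the operators and spaces discussed above, the positive integral operator on L^2(X, u) and the classical Mercer expansion take the following standard form (a generic sketch, not the extended versions proved in the thesis):

```latex
\[
  (T_K f)(x) \;=\; \int_X K(x,y)\, f(y)\, du(y), \qquad f \in L^2(X,u),
\]
\[
  K(x,y) \;=\; \sum_{n \ge 1} \lambda_n\, \phi_n(x)\, \phi_n(y),
  \qquad T_K \phi_n = \lambda_n \phi_n, \quad \lambda_n \ge 0,
\]
\[
  \mathcal{H}_K \;=\; \Big\{\, f = \sum_{n} a_n \phi_n \;:\; \sum_{n} a_n^2 / \lambda_n < \infty \,\Big\}
  \quad\text{(the associated reproducing kernel Hilbert space).}
\]
```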
33

Aspekte unendlichdimensionaler Martingaltheorie und ihre Anwendung in der Theorie der Finanzmärkte / Aspects of infinite-dimensional martingale theory and its application to the theory of financial markets

Schöckel, Thomas 19 October 2004 (has links)
We model a financial market with infinitely many assets as a continuous-time stochastic process X with values in a separable Hilbert space H. In this setting we show the equivalence of market completeness and the uniqueness of the equivalent martingale measure, provided X has continuous paths. We further show that, under some technical conditions, the absence of asymptotic arbitrage of the first/second kind (in the sense of Kabanov/Kramkov) is equivalent to the absolute continuity of the reference measure with respect to a unique, locally equivalent martingale measure. If X has continuous paths, the absence of general asymptotic arbitrage is equivalent to the existence of an equivalent local martingale measure. Furthermore, we give a criterion for the existence of an optional decomposition of X. We apply this result to the problem of risk minimization under a given upper bound on the investment (efficient hedging in the sense of Föllmer/Leukert), which allows us to solve this optimization problem in the infinite-dimensional context. We also present an infinite-dimensional extension of the Heath-Jarrow-Morton term structure model and construct two further term structure models using the Markov potential approach of Rodgers. As a contribution to stochastic analysis in Hilbert spaces, we prove a pathwise version of the Itô formula for stochastic processes with continuous paths in a separable Hilbert space, from which a pathwise version of the theorem on interchanging stochastic and Lebesgue integrals is derived. We also prove a version of the Clark formula for Hilbert-space-valued Brownian motion.
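The Itô formula mentioned above has, in the Hilbert-space setting and under standard regularity assumptions, the following generic shape (shown only as an illustration of the statement, not as the precise pathwise formulation proved in the thesis):

```latex
\[
  f(X_t) \;=\; f(X_0)
  \;+\; \int_0^t \big\langle Df(X_s),\, dX_s \big\rangle_H
  \;+\; \tfrac{1}{2} \int_0^t \operatorname{tr}\!\big( D^2 f(X_s)\, d[X]_s \big),
\]
```
where Df and D^2 f are the first and second Fréchet derivatives of f and [X] denotes the (tensor) quadratic variation of X.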
34

High-order in time discontinuous Galerkin finite element methods for linear wave equations

Al-Shanfari, Fatima January 2017 (has links)
In this thesis we analyse the high-order in time discontinuous Galerkin finite element method (DGFEM) for second-order in time linear abstract wave equations. Our abstract approximation analysis is a generalisation of the approach introduced by Claes Johnson (Comput. Methods Appl. Mech. Engrg., 107:117-129, 1993), writing the second-order problem as a system of first-order problems. We consider abstract spatial (time-independent) operators and high-order in time basis functions when discretising in time; we also prove approximation results in the case of linear constraints, e.g. non-homogeneous boundary data. We take a two-step approximation approach: using the high-order in time DGFEM, the discretisation approach in time introduced by D. Schötzau (PhD thesis, Swiss Federal Institute of Technology, Zürich, 1999), to first obtain the semidiscrete scheme, and then a conformal spatial discretisation to obtain the fully-discrete formulation. We show solvability, unconditional stability and conditional a priori error estimates within our abstract framework for the fully discretised problem. The skew-symmetric spatial forms arising in our abstract framework for the semi- and fully-discrete schemes do not fulfil the underlying assumptions in D. Schötzau's work; however, the semi-discrete and fully-discrete forms satisfy an inf-sup condition, essential for our proofs, so that in this sense our approach is also a generalisation of D. Schötzau's work. All estimates are given in a norm in space and time which is weaker than the Hilbert norm belonging to our abstract function spaces, a typical complication in evolution problems. To the best of the author's knowledge, with the approximation approach used here, these stability and a priori error estimates, with their abstract structure, have not been shown before for the abstract variational formulation used in this thesis. Finally we apply our abstract framework to the acoustic and linear elastodynamic wave equations with non-homogeneous Dirichlet boundary data.
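The reduction to a first-order system mentioned above can be sketched as follows (schematically, for an abstract spatial operator A; the function-space setting is as in the thesis):

```latex
\[
  u''(t) + A\,u(t) = f(t)
  \qquad\Longleftrightarrow\qquad
  \begin{cases}
    u'(t) = v(t), \\
    v'(t) = f(t) - A\,u(t),
  \end{cases}
\]
```
so that the high-order in time DGFEM is applied to the first-order system in the pair (u, v) rather than to the second-order equation directly.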
35

Universalidade e ortogonalidade em espaços de Hilbert de reprodução / Universality and orthogonality in reproducing Kernel Hilbert spaces

Barbosa, Victor Simões 19 February 2013 (has links)
We analyze the role of feature maps of a positive definite kernel K acting on a Hausdorff topological space E with respect to two specific properties: the universality of K and orthogonality in the reproducing kernel Hilbert space of K from disjoint supports. Feature maps always exist but need not be unique. A feature map may be interpreted as a kernel-based procedure that maps the data from the original input space E into a potentially higher-dimensional "feature space" in which linear methods may then be used. Both properties, universality and orthogonality from disjoint supports, make sense under continuity of the kernel. Universality of K is equivalent to the fundamentality of {K(·, y) : y ∈ X} in the space of all continuous functions on X, endowed with the topology of uniform convergence, for all nonempty compact subsets X of E. One of the main results in this work is a characterization of the universality of K through a similar concept for the feature map. Orthogonality from disjoint supports refers to the orthogonality of any two functions in the reproducing kernel Hilbert space of K when the functions have disjoint supports.
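In the standard notation behind the abstract above, a feature map and the universality property can be written as follows (a generic sketch of the definitions, not the characterization proved in the thesis):

```latex
\[
  K(x,y) \;=\; \big\langle \Phi(x),\, \Phi(y) \big\rangle_{\mathcal{H}},
  \qquad \Phi : E \to \mathcal{H},
\]
\[
  K \ \text{universal}
  \iff
  \overline{\operatorname{span}}\,\{\, K(\cdot, y) : y \in X \,\} \;=\; C(X)
  \quad\text{for every nonempty compact } X \subseteq E,
\]
```
with the closure taken in the topology of uniform convergence on X.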
38

Stochastic approximation in Hilbert spaces / Approximation stochastique dans les espaces de Hilbert

Dieuleveut, Aymeric 28 September 2017 (has links)
The goal of supervised machine learning is to infer relationships between a phenomenon one seeks to predict and "explanatory" variables. To that end, multiple occurrences of the phenomenon are observed, from which a prediction rule is constructed. The last two decades have witnessed the emergence of very large data sets, both in terms of the number of observations (e.g., in image analysis) and in terms of the number of explanatory variables (e.g., in genetics). This has raised two challenges: first, avoiding the pitfall of over-fitting, especially when the number of explanatory variables is much higher than the number of observations; and second, dealing with the computational constraints, such as when the mere resolution of a linear system becomes a difficulty of its own. Algorithms that take their roots in stochastic approximation methods tackle both of these difficulties simultaneously: these stochastic methods dramatically reduce the computational cost without degrading the quality of the proposed prediction rule, and they can naturally avoid over-fitting. As a consequence, the core of this thesis is the study of stochastic gradient methods. The popular parametric methods give predictors which are linear functions of a set of explanatory variables. However, they often result in an imprecise approximation of the underlying statistical structure. In the non-parametric setting, which is paramount in this thesis, this restriction is lifted: the class of functions from which the predictor is chosen depends on the observations themselves. In practice, these methods have multiple purposes, and are essential for learning with non-vectorial data, which can be mapped onto a vector in a functional space using a positive definite kernel. This allows the use of algorithms designed for vectorial data, but requires the analysis to be carried out in the associated non-parametric space: the reproducing kernel Hilbert space. Moreover, the analysis of non-parametric regression also sheds some light on the parametric setting when the number of predictors is much larger than the number of observations. The first contribution of this thesis is a detailed analysis of stochastic approximation in the non-parametric setting, specifically in reproducing kernel Hilbert spaces. This analysis proves optimal convergence rates for the averaged stochastic gradient descent algorithm. As special care is taken to use minimal assumptions, it applies to numerous situations, and covers both the setting in which the number of observations is known a priori and the setting in which the learning algorithm works in an on-line fashion. The second contribution is an algorithm based on acceleration, which converges at the optimal speed both from the optimization point of view and from the statistical one. In the non-parametric setting, this improves the convergence rate up to optimality, even in particular regimes for which the first algorithm remains sub-optimal. Finally, the third contribution of the thesis consists in an extension of the framework beyond the least-squares loss: the stochastic gradient descent algorithm is analyzed as a Markov chain. This point of view leads to an intuitive and insightful interpretation that outlines the differences between the quadratic setting and the more general setting. A simple method resulting in provable improvements in the convergence is then proposed.
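As a concrete, finite-dimensional toy illustration of the averaged stochastic gradient descent discussed above for least-squares regression, here is a minimal sketch; the step size, data generation and dimensions are illustrative assumptions, not the constants or regimes analyzed in the thesis.

```python
import numpy as np

def averaged_sgd_least_squares(features, targets, step_size=0.1):
    """Single pass of SGD on the square loss, with Polyak-Ruppert averaging."""
    n, d = features.shape
    theta = np.zeros(d)          # current iterate
    theta_bar = np.zeros(d)      # running average of the iterates
    for i in range(n):
        x, y = features[i], targets[i]
        # stochastic gradient of 0.5 * (x . theta - y)^2
        grad = (x @ theta - y) * x
        theta -= step_size * grad
        # incremental update of the average of theta_1, ..., theta_{i+1}
        theta_bar += (theta - theta_bar) / (i + 1)
    return theta_bar

# Toy usage: noisy linear model y = x . theta_star + noise.
rng = np.random.default_rng(0)
n, d = 5000, 20
X = rng.normal(size=(n, d))
theta_star = rng.normal(size=d)
y = X @ theta_star + 0.1 * rng.normal(size=n)

estimate = averaged_sgd_least_squares(X, y)
print("estimation error:", np.linalg.norm(estimate - theta_star))
```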
39

Contribution à la régression non paramétrique avec un processus erreur d'autocovariance générale et application en pharmacocinétique / Contribution to nonparametric regression estimation with general autocovariance error process and application to pharmacokinetics

Benelmadani, Djihad 18 September 2019 (has links)
In this thesis, we consider the fixed-design regression model with repeated measurements, where the errors form a process with a general autocovariance function, i.e. a second-order process (stationary or nonstationary) with a non-differentiable covariance function along the diagonal. We are interested, among other problems, in the nonparametric estimation of the regression function of this model. We first consider the well-known kernel regression estimator proposed by Gasser and Müller. We study its asymptotic performance when the number of experimental units and the number of observations tend to infinity. For a regular sequence of designs, we improve the higher-order rates of convergence of the variance and the bias, and we also prove the asymptotic normality of this estimator in the case of correlated errors. Second, we propose a new kernel estimator of the regression function based on a projection property. This estimator is constructed through the autocovariance function of the errors and a specific function belonging to the Reproducing Kernel Hilbert Space (RKHS) associated with the autocovariance function. We study its asymptotic performance using RKHS properties, which allow us to obtain the optimal convergence rate of the variance. We also prove its asymptotic normality and show that this new estimator has a smaller asymptotic variance than that of Gasser and Müller; a simulation study is conducted to confirm this theoretical result. Third, we propose a new kernel estimator for the regression function, constructed through the trapezoidal numerical approximation of the kernel regression estimator based on continuous observations. We study its asymptotic performance and prove its asymptotic normality. Moreover, this estimator allows us to obtain the asymptotically optimal sampling design for the estimation of the regression function. We run a simulation study to test the performance of the proposed estimator in a finite-sample setting, where it performs well in terms of Integrated Mean Squared Error (IMSE); in addition, we show the reduction of the IMSE obtained by using the optimal sampling design instead of the uniform design. Finally, we consider an application of regression function estimation to pharmacokinetics problems. We propose to use nonparametric kernel methods for the estimation of the concentration-time curve, instead of the classical parametric ones, and demonstrate their good performance via a simulation study and real data analysis. We also investigate the problem of estimating the Area Under the concentration Curve (AUC), for which we introduce a new kernel estimator obtained by integrating the regression function estimator. We show, using a simulation study, that the proposed estimator outperforms the classical one in terms of Mean Squared Error. The crucial problem of finding the optimal sampling design for the AUC estimation is investigated using the Generalized Simulated Annealing algorithm.
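As a rough illustration of the kind of estimators discussed above, the sketch below implements a Gasser-Müller-type kernel regression estimate and computes the area under the fitted curve with the trapezoidal rule; the kernel, bandwidth, design and test curve are illustrative choices, and the thesis's projection-based and trapezoidal estimators differ in their precise construction.

```python
import numpy as np

def gasser_muller(t_grid, design, responses, bandwidth):
    """Gasser-Mueller kernel regression estimate on t_grid (Epanechnikov kernel)."""
    # interval boundaries s_0 <= s_1 <= ... <= s_n built from midpoints of the design
    s = np.concatenate(([0.0], (design[:-1] + design[1:]) / 2.0, [1.0]))

    def kernel_cdf(u):
        # integral of the Epanechnikov kernel 0.75 * (1 - v^2) over [-1, u]
        u = np.clip(u, -1.0, 1.0)
        return 0.25 * (2.0 + 3.0 * u - u ** 3)

    estimates = np.zeros_like(t_grid)
    for j, t in enumerate(t_grid):
        # weight of observation i: integral of K_h(t - s) over [s_{i-1}, s_i]
        weights = kernel_cdf((t - s[:-1]) / bandwidth) - kernel_cdf((t - s[1:]) / bandwidth)
        estimates[j] = np.sum(weights * responses)
    return estimates

# Toy usage: noisy observations of a concentration-like curve on [0, 1].
rng = np.random.default_rng(1)
design = np.sort(rng.uniform(0.0, 1.0, size=200))
truth = lambda t: 4.0 * t * np.exp(-3.0 * t)
responses = truth(design) + 0.05 * rng.normal(size=design.size)

t_grid = np.linspace(0.0, 1.0, 401)
fit = gasser_muller(t_grid, design, responses, bandwidth=0.08)
# trapezoidal rule for the area under the fitted curve (AUC)
auc = float(np.sum((fit[1:] + fit[:-1]) * np.diff(t_grid)) / 2.0)
true_auc = float(np.sum((truth(t_grid)[1:] + truth(t_grid)[:-1]) * np.diff(t_grid)) / 2.0)
print("estimated AUC:", auc, "true AUC:", true_auc)
```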
40

Filtrace stochastických evolučních rovnic / Filtering for Stochastic Evolution Equations

Kubelka, Vít January 2020 (has links)
The linear filtering problem for infinite-dimensional Gaussian processes is studied, the observation process being finite-dimensional. Integral equations for the filter and for the covariance of the error are derived. General results are applied to linear SPDEs driven by a Gauss-Volterra process observed at finitely many points of the domain, and to delayed SPDEs driven by white noise. Subsequently, the continuous dependence of the filter and of the observation error on parameters which may be present both in the signal and in the observation process is proved. These results are applied to signals governed by stochastic heat equations driven by distributed or pointwise fractional noise. The observation process may be a noisy observation of the signal at given points in the domain, the positions of which may depend on the parameter.
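For comparison, in the classical finite-dimensional linear-Gaussian setting the filter and the error covariance satisfy the Kalman-Bucy equations below; this is only the standard finite-dimensional analogue, not the integral equations derived in the thesis for infinite-dimensional signals.

```latex
\[
  d\widehat{X}_t \;=\; A\,\widehat{X}_t\,dt
  \;+\; P_t\,C^{\top} R^{-1}\big( dY_t - C\,\widehat{X}_t\,dt \big),
  \qquad \widehat{X}_t = \mathbb{E}\big[ X_t \mid \mathcal{Y}_t \big],
\]
\[
  \dot{P}_t \;=\; A P_t + P_t A^{\top} + Q \;-\; P_t\,C^{\top} R^{-1} C\,P_t,
\]
```
for a signal dX_t = A X_t dt + dW_t (noise covariance rate Q) observed through dY_t = C X_t dt + dV_t (noise covariance rate R), with P_t the covariance of the filtering error.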
