381 |
Method of trimming PDE surfaces. Ugail, Hassan (January 2006)
A method for trimming surfaces generated as solutions to Partial Differential Equations
(PDEs) is presented. The work utilises the 2D parameter space on which the trim
curves are defined; the projection of these curves onto the parametrically
represented PDE surface is then trimmed out. To do this we define the trim curves
as a set of boundary conditions that enable us to solve a low-order elliptic
PDE on the parameter space. The chosen elliptic PDE is solved analytically, even
in the case of a very general complex trim, allowing the design process to be carried
out interactively in real time. To demonstrate the capability of this technique we
discuss a series of examples where trimmed PDE surfaces may be applicable.
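The analytic elliptic-PDE trim itself is specific to the thesis, but the underlying idea (trim in the 2D parameter space, then map only the surviving parameter points through the surface) can be sketched. The following Python fragment is a hedged illustration only: it substitutes a plain point-in-polygon test for the PDE-based trim, and the surface map and trim curve are invented for the example.

```python
import numpy as np
from matplotlib.path import Path

# Parametric surface S(u, v) on [0,1]^2 -- an arbitrary example, not from the thesis.
def surface(u, v):
    return np.stack([u, v, 0.3 * np.sin(2 * np.pi * u) * np.cos(2 * np.pi * v)], axis=-1)

# Trim curve defined in the (u, v) parameter space (here: a simple polygon).
trim_curve = Path([(0.3, 0.3), (0.7, 0.35), (0.65, 0.7), (0.35, 0.65)])

u, v = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
uv = np.column_stack([u.ravel(), v.ravel()])
keep = ~trim_curve.contains_points(uv)       # trim out the curve's interior

trimmed = surface(uv[keep, 0], uv[keep, 1])  # only untrimmed points are mapped to 3D
print(trimmed.shape)
```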
|
382 |
Direct guaranteed lower eigenvalue bounds with quasi-optimal adaptive mesh-refinement. Puttkammer, Sophie Louise (19 January 2024)
Guaranteed lower eigenvalue bounds (GLB) for elliptic eigenvalue problems of partial differential equations are of high relevance in theory and practice. Due to the Rayleigh-Ritz (or min-max) principle, all conforming finite element methods (FEM) provide guaranteed upper eigenvalue bounds. A post-processing for nonconforming FEM of Carstensen and Gedicke (Math. Comp., 83.290, 2014) as well as Carstensen and Gallistl (Numer. Math., 126.1, 2014) computes GLB. However, the maximal mesh-size enters these bounds as a global parameter and may cause significant underestimation under adaptive mesh-refinement; in some numerical examples this post-processing fails completely on locally refined meshes. Inspired by a recent skeletal method of Carstensen, Zhai, and Zhang (SIAM J. Numer. Anal., 58.1, 2020), this thesis presents on the one hand a modified hybrid high-order method (m=1) and on the other hand a general framework for an extra-stabilized nonconforming Crouzeix-Raviart (m=1) or Morley (m=2) FEM. These novel methods compute direct GLB for the m-Laplace operator: an easily verifiable condition on the maximal mesh-size guarantees that the computed k-th discrete eigenvalue is a lower bound for the k-th Dirichlet eigenvalue. This GLB property as well as a priori convergence rates are established in any space dimension. The novel ansatz allows for the adaptive mesh-refinement necessary to recover optimal convergence rates for non-smooth eigenfunctions.
Striking numerical evidence indicates the superiority of the new adaptive eigensolvers. For the extra-stabilized nonconforming methods, a generalization of known abstract arguments, the axioms of adaptivity of Carstensen, Feischl, Page, and Praetorius (Comput. Math. Appl., 67.6, 2014) as well as Carstensen and Rabus (SIAM J. Numer. Anal., 55.6, 2017), proves convergence of the GLB towards a simple eigenvalue with optimal rates.
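For orientation, the earlier post-processed bounds the abstract contrasts with have, in the Crouzeix-Raviart case (m=1), the following general shape. The exact constant kappa is element-dependent and the thesis should be consulted for the precise statement, so this display is a hedged sketch rather than a quoted theorem:

```latex
% Shape of the post-processed lower bound (Carstensen--Gedicke type), m = 1:
% the k-th Dirichlet eigenvalue is bounded below via the nonconforming
% discrete eigenvalue lambda_k^{CR} and the *global* maximal mesh-size h_max.
\[
  \lambda_k \;\ge\; \frac{\lambda_k^{\mathrm{CR}}}{1 + \kappa^2\, h_{\max}^{2}\, \lambda_k^{\mathrm{CR}}}
\]
% Because h_max is global, a few coarse elements in an adaptively refined mesh
% keep the bound pessimistic, which motivates the direct GLB of the thesis.
```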
|
383 |
Stopping Times Related to Trading Strategies. Abramov, Vilen (25 April 2008)
No description available.
|
384 |
Impacts of Ignoring Nested Data Structure in Rasch/IRT Model and Comparison of Different Estimation Methods. Chungbaek, Youngyun (06 June 2011)
This study investigates the impacts of ignoring nested data structure in the Rasch/1PL item response theory (IRT) model via two-level and three-level hierarchical generalized linear models (HGLM). Rasch/IRT models are frequently used in educational and psychometric research on data obtained from multistage cluster sampling, which is likely to violate the assumption of independent observations of examinees required by these models. This violation, however, is ignored in current standard practice, which applies the standard Rasch/IRT model to large-scale testing data. A simulation study (Study Two) addressed the effects of ignoring nested data structure in Rasch/IRT models under various conditions, following a simulation study (Study One) that compared the accuracy and efficiency of three estimation methods commonly used in HGLM: Penalized Quasi-Likelihood (PQL), the Laplace approximation, and Adaptive Gaussian Quadrature (AGQ).
As expected, PQL tended to produce seriously biased item difficulty and ability variance estimates, whereas Laplace and AGQ were almost unbiased in both 2-level and 3-level analyses. In terms of root mean squared error (RMSE), the three methods performed without substantive differences for item difficulty and ability variance estimates in both 2-level and 3-level analyses, except for level-2 ability variance estimates in 3-level analyses. Overall, Laplace and AGQ performed similarly well in bias and RMSE of parameter estimates; however, Laplace exhibited a much lower convergence rate than AGQ in 3-level analyses.
The results from AGQ, which produced the most accurate and stable estimates among the three computational methods, demonstrated that the theoretical standard errors (SE), i.e., asymptotic information-based SEs, were underestimated by up to 34% when 2-level analyses were applied to data generated from a 3-level model, implying that the Type I error rate is inflated when nested data structures are ignored in Rasch/IRT models. The underestimation of the theoretical standard errors grew substantially more severe as the true ability variance or the number of students within schools increased, regardless of test length or the number of schools. / Ph. D.
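To make the estimation methods concrete, the sketch below evaluates the marginal likelihood of one Rasch examinee by Gauss-Hermite quadrature, the non-adaptive core of AGQ (PQL and the Laplace approximation replace this integral with analytic approximations). The item difficulties and responses are invented for illustration, not taken from the study.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def rasch_marginal_loglik(responses, difficulties, sigma, n_quad=15):
    """Log-likelihood of one examinee's 0/1 response vector under a Rasch model
    with ability theta ~ N(0, sigma^2), integrated out by Gauss-Hermite quadrature."""
    nodes, weights = hermgauss(n_quad)        # rule for integral of exp(-x^2) f(x)
    theta = np.sqrt(2.0) * sigma * nodes      # change of variables to N(0, sigma^2)
    # P(correct) under the Rasch model at each quadrature node
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - difficulties[None, :])))
    lik = np.prod(np.where(responses[None, :] == 1, p, 1.0 - p), axis=1)
    return np.log(np.sum(weights * lik) / np.sqrt(np.pi))

difficulties = np.array([-1.0, 0.0, 1.0, 2.0])   # illustrative items
responses = np.array([1, 1, 0, 0])               # illustrative response pattern
print(rasch_marginal_loglik(responses, difficulties, sigma=1.0))
```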
|
385 |
Un modelo praxeológico para el estudo de la transformada de Laplace en ingeniería mecatrónica. Flores Gallo, Diana Carolina (11 January 2022)
This work seeks to identify and analyse the praxeologies of the Laplace transform in the
mechatronics engineering programme: in the mathematics-teaching institution (E(M)),
through the Differential Equations course, and in the intermediary institution (E(DI)),
through the Classical Control course, so that connections, differences, and the
transposition between these institutions can be established through the circulation of
knowledge. To this end, we develop a qualitative methodology in two stages. The first
stage comprises an epistemological study of the Laplace transform, a review of the
curriculum of the Differential Equations course, a description of textbooks, and the
identification of the praxeology in the E(M). The second stage comprises an interview
with a specialist (a mechatronic engineer), a description of the textbooks of the
Classical Control course, the identification of the praxeology of the E(DI), and the
connection between the praxeologies of the E(M) and the E(DI). As a result of this
research, the transposition between the knowledge-producing institution (P(M)) and the
E(M) could be observed through the use of the exponential factor: in the P(M) this
factor is used to reduce the order of a differential equation, while in the E(M) it is
used to convert an ODE into an algebraic equation. The transposition between the E(M)
and the E(DI) could also be seen through various techniques that are validated by a
technology foreign to the other institution. For example, a task in the E(DI) of
determining the stability of a system using the Laplace transform can be solved using
only Matlab and is validated by its corresponding technology, whereas in the E(M) the
same task is validated through that institution's own technologies, such as tables of
Laplace transforms. In this way, the raison d'être of the Laplace transform for
students in mechatronics engineering training can be identified, which helps assess the
usefulness of the Laplace transform for solving tasks that may arise in their
professional environment.
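The E(M) technique the abstract refers to, using the Laplace transform to convert an ODE into an algebraic equation, can be sketched with sympy. The ODE and the zero initial conditions below are an illustrative choice, not taken from the thesis.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Example ODE: y'' + 3 y' + 2 y = 1 with y(0) = y'(0) = 0 (illustrative choice).
Y = sp.symbols('Y')                       # Y(s), the transformed unknown
ode_lhs = s**2 * Y + 3 * s * Y + 2 * Y    # transform of y'' + 3y' + 2y under zero ICs
rhs = sp.laplace_transform(sp.S.One, t, s, noconds=True)   # L{1} = 1/s

Y_sol = sp.solve(sp.Eq(ode_lhs, rhs), Y)[0]        # algebraic step: Y = 1/(s(s+1)(s+2))
y_sol = sp.inverse_laplace_transform(Y_sol, s, t)  # back to the time domain
print(sp.simplify(y_sol))                          # 1/2 - exp(-t) + exp(-2*t)/2
```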
|
386 |
Frequentist-Bayesian Hybrid Tests in Semi-parametric and Non-parametric Models with Low/High-Dimensional Covariate. Xu, Yangyi (03 December 2014)
This dissertation provides Frequentist-Bayesian hybrid test statistics for two testing problems: first, a test for significant differences between non-parametric functions; second, a test that allows any departure from constancy in the effects of high-dimensional predictors X. The construction of the proposed test statistics is given for both problems.
For the first problem, we consider the statistical difference among massive outcomes or signals, which is of interest in many diverse fields including neurophysiology, imaging, and engineering. Such data often arise from nonlinear systems, exhibit row/column patterns, follow non-normal distributions, and contain other hard-to-identify internal relationships, all of which make it difficult to test the significance of differences between them under unknown relationships and high dimensionality. In this dissertation, we propose an Adaptive Bayes Sum Test capable of testing the significance of the difference between two nonlinear systems, based on universal non-parametric mathematical decomposition/smoothing components. Our approach adapts the Bayes sum test statistic of Hart (2009). Any internal pattern is treated through a Fourier transformation, and resampling techniques are applied to construct the empirical distribution of the test statistic, reducing the effect of non-normality. A simulation study suggests that our approach performs better than the alternative method, the Adaptive Neyman Test of Fan and Lin (1998). The usefulness of our approach is demonstrated with an application to the identification of electronic chips as well as an application to testing for a change in the pattern of precipitation.
For the second problem, numerous statistical methods have been developed for analyzing high-dimensional data, but they mainly focus on variable selection, are of limited use for testing, and often require explicit, differentiable likelihood functions. In this dissertation, we propose a "Hybrid Omnibus Test" for high-dimensional data with far fewer requirements. It is developed under a semi-parametric framework in which a likelihood function is no longer necessary: a Frequentist-Bayesian hybrid score-type test for a functional generalized partial linear single index model, whose link is a functional of the predictors through a generalized partially linear single index. To overcome the mathematical difficulty of likelihood derivation, we propose an efficient score based on estimating equations and use it to construct our Hybrid Omnibus Test. We compare our approach with an empirical likelihood ratio test and with Bayesian inference based on the Bayes factor in a simulation study, in terms of false positive rate and true positive rate. The results suggest that our approach outperforms the alternatives in false positive rate, true positive rate, and computational cost in both high- and low-dimensional cases. The advantage of our approach is also demonstrated on published biological results, with an application to a genetic pathway data set for type II diabetes. / Ph. D.
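The Adaptive Bayes Sum Test itself follows Hart (2009); as a hedged stand-in, the sketch below combines the two ingredients named in the abstract, a Fourier decomposition of the curve difference and a resampling null, in a plain permutation test. The data, the number of coefficients, and the statistic are illustrative assumptions, not the dissertation's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_sum_stat(diff, n_coef=10):
    """Sum of squared low-order Fourier coefficients of a curve difference."""
    coefs = np.fft.rfft(diff)[1:n_coef + 1]   # drop the DC term
    return np.sum(np.abs(coefs) ** 2)

def permutation_test(x, y, n_perm=2000):
    """x, y: (n_replicates, n_points) arrays of repeated signal measurements."""
    observed = fourier_sum_stat(x.mean(axis=0) - y.mean(axis=0))
    pooled = np.vstack([x, y])
    n = x.shape[0]
    null = np.empty(n_perm)
    for b in range(n_perm):                   # resampling null distribution
        idx = rng.permutation(pooled.shape[0])
        null[b] = fourier_sum_stat(pooled[idx[:n]].mean(axis=0)
                                   - pooled[idx[n:]].mean(axis=0))
    return np.mean(null >= observed)          # permutation p-value

t = np.linspace(0, 1, 128)
x = np.sin(2 * np.pi * t) + rng.normal(0, 0.5, size=(20, 128))
y = np.sin(2 * np.pi * t) + 0.3 * np.sin(6 * np.pi * t) \
    + rng.normal(0, 0.5, size=(20, 128))
print(permutation_test(x, y))   # small p-value: the mean curves differ
```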
|
387 |
Time-Varying Coefficient Models for Recurrent Events. Liu, Yi (14 November 2018)
I have developed time-varying coefficient models for recurrent event data to evaluate the temporal profiles of the recurrence rate and covariate effects. There are three major parts in this dissertation. The first two parts propose a mixed Poisson process model with gamma frailties for single-type recurrent events. The third part proposes a Bayesian joint model based on multivariate log-normal frailties for multi-type recurrent events. In the first part, I propose an approach based on penalized B-splines to obtain smooth estimates of both the time-varying coefficients and the log baseline intensity. An EM algorithm is developed for parameter estimation. One issue with this approach is that the estimating procedure is conditional on smoothing parameters, which have to be selected by cross-validation or by optimizing a certain performance criterion; the procedure can be computationally demanding with a large number of time-varying coefficients. To achieve objective estimation of the smoothing parameters, I propose a mixed-model representation for the penalized splines: spline coefficients are treated as random effects and smoothing parameters are estimated as variance components. An EM algorithm embedded with a penalized quasi-likelihood approximation is developed to estimate the model parameters. The third part proposes a Bayesian joint model with time-varying coefficients for multi-type recurrent events. Bayesian penalized splines are used to estimate the time-varying coefficients and the log baseline intensity. One challenge with Bayesian penalized splines is that the smoothness of a spline fit is considerably sensitive to the subjective choice of hyperparameters; I establish a procedure to determine the hyperparameters objectively through a robust prior specification. A Markov chain Monte Carlo procedure based on Metropolis-adjusted Langevin algorithms is developed to sample from the high-dimensional distribution of spline coefficients, including a joint sampling scheme to achieve better convergence and mixing. Simulation studies in the second and third parts confirmed satisfactory model performance in estimating time-varying coefficients under different curvature and event rate conditions. The models in the second and third parts were applied to data from a commercial truck driver naturalistic driving study. The results reveal that drivers with 7 hours or less of sleep prior to a shift have a significantly higher intensity after 8 hours of on-duty driving and that their intensity remains higher after taking a break. The results also show drivers' self-selection on sleep time, total driving hours in a shift, and breaks. These applications provide crucial insight into the impact of sleep time on driving performance for commercial truck drivers and highlight the on-road safety implications of insufficient sleep and breaks while driving. This dissertation provides flexible and robust tools to evaluate the temporal profile of intensity for recurrent events. / Ph. D. / The overall objective of this dissertation is to develop models to evaluate the time-varying profiles of event occurrences and the time-varying effects of risk factors on event occurrences. There are three major parts. The first two are designed for a single event type and are based on approaches in which the whole model is conditional on a certain kind of tuning parameter.
The value of this tuning parameter has to be pre-specified by users and is influential on the model results. Instead of pre-specifying the value, I develop an approach that objectively estimates the optimal value of the tuning parameter and obtains the model results simultaneously. The third part proposes a model for multi-type events. One challenge is that the model results are considerably sensitive to the subjective choice of hyperparameters; I establish a procedure to determine the hyperparameters objectively. Simulation studies confirmed satisfactory model performance in estimating the temporal profiles of both event occurrences and the effects of risk factors. The models were applied to data from a commercial truck driver naturalistic driving study. The results reveal that drivers with 7 hours or less of sleep prior to a shift have a significantly higher intensity after 8 hours of on-duty driving and that their driving risk remains higher after taking a break. The results also show drivers' self-selection on sleep time, total driving hours in a shift, and breaks. These applications provide crucial insight into the impact of sleep time on driving performance for commercial truck drivers and highlight the on-road safety implications of insufficient sleep and breaks while driving. This dissertation provides flexible and robust tools to evaluate the temporal profiles of both event occurrences and the effects of risk factors.
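The penalized B-spline estimation in the first part can be illustrated with a minimal Eilers-Marx-style P-spline Poisson fit of a smooth log rate. This is a hedged sketch with an invented data set and a fixed smoothing parameter; the dissertation instead selects smoothing parameters by cross-validation or estimates them as variance components.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 10, 500))
counts = rng.poisson(np.exp(0.5 * np.sin(t)))       # fake event counts over time

k = 3                                               # cubic B-splines
knots = np.concatenate([[0] * k, np.linspace(0, 10, 15), [10] * k])
B = BSpline.design_matrix(t, knots, k).toarray()    # basis matrix (n x p)
D = np.diff(np.eye(B.shape[1]), n=2, axis=0)        # 2nd-order difference penalty
lam = 10.0                                          # fixed smoothing parameter

beta = np.zeros(B.shape[1])
for _ in range(25):                                 # penalized IRLS for Poisson fit
    eta = B @ beta
    mu = np.exp(eta)
    z = eta + (counts - mu) / mu                    # working response
    lhs = B.T @ (mu[:, None] * B) + lam * D.T @ D   # penalized normal equations
    beta = np.linalg.solve(lhs, B.T @ (mu * z))

print(np.exp(B @ beta)[:5])                         # fitted intensity at first points
```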
|
388 |
Duality theory for p-th power factorable operators and kernel operators. Galdames Bravo, Orlando Eduardo (29 July 2013)
This work is devoted to the analysis of a particular class of (linear and continuous)
operators between Banach function spaces. The aim is to advance the theory of the
so-called p-th power factorable operators by analysing all aspects of duality. This
class of operators has proved useful both in the factorization theory of operators on
Banach function spaces (Maurey-Rosenthal theory) and in harmonic analysis (optimal
domains of the Fourier transform and of convolution operators). In order to develop
this duality theory and its applications, a new class of operators is defined and
studied, with extension properties involving both the operator and its adjoint. This
is the family of (p,q)-th power factorable operators, 1 <= p, q < infinity, which can
be characterized by a factorization scheme through the p-th power space of the domain
and the dual of the q-th power space of the dual of the codomain. An equivalence is
also obtained via a factorization diagram through spaces L^p(m) and L^q(n)', where m
and n are suitable vector measures; this is our main tool. This construction requires
some preliminary results on the p-th powers of the Banach function spaces involved,
which are also studied.
With these tools, results are given characterizing the optimal range (the smallest
Banach function space in which the operator can take values) for operators from a
Banach space into a Banach function space. Moreover, the idea of the optimal
factorization of an operator, one that improves a previous factorization, is developed
and presented formally in terms of the diagram that a (p,q)-th power factorable
operator must satisfy. All these results extend the existing vector-measure
computations of optimal domains for operators on Banach function spaces. Such
computations have produced relevant results in several areas of mathematical analysis
by describing the largest Banach function space to which relevant operators, such as
the Fourier transform or the Hardy operator, can be extended.
The theory is applied to obtain new results in certain fields: the interpolation
theory of operators between Banach function spaces, kernel operators and, in
particular, the Laplace transform. / Galdames Bravo, OE. (2013). Duality theory for p-th power factorable operators and kernel operators [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31523
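As a rough orientation for the factorization scheme described, the display below draws one standard diagram of this type. The notation (the map i_p into the p-th power space, the natural map j_q, and the arrow directions) is our assumption, modelled on the optimal-domain literature; the thesis's precise maps may differ.

```latex
% Hedged sketch: commuting scheme for a (p,q)-th power factorable operator T
% between Banach function spaces, with m, n suitable vector measures.
\[
\begin{array}{ccc}
X(\mu) & \xrightarrow{\ T\ } & Y(\nu) \\
{\scriptstyle i_p}\big\downarrow & & \big\downarrow{\scriptstyle j_q} \\
L^{p}(m) & \xrightarrow{\ \widetilde{T}\ } & \bigl(L^{q}(n)\bigr)^{\prime}
\end{array}
\qquad j_q \circ T = \widetilde{T} \circ i_p .
\]
```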
|
389 |
Wavelets Based on Second Order Linear Time Invariant Systems, Theory and Applications. Abuhamdia, Tariq Maysarah (28 April 2017)
This study introduces new families of wavelets. The first is directly derived from the response of Second Order Underdamped Linear Time-Invariant (SOULTI) systems, while the second generalizes the first to the complex domain and is similar to the Laplace transform kernel function. The first takes the acronym SOULTI wavelet, while the second is named the Laplace wavelet. The most important criterion for a function or signal to be a wavelet is the ability to recover the original signal from its continuous wavelet transform, and it is shown that the original signal can be recovered once the SOULTI or Laplace wavelet transform is applied to decompose it. Both wavelet transforms are found to satisfy linear differential equations, called the reconstructing differential equations, which are closely related to the differential equations that produce the wavelets. The new wavelets can have well-defined time-frequency resolutions, and they have useful properties: a direct relation between scale and frequency; unique transform formulas that can be easily obtained for most elementary signals, such as the unit step, sinusoids, polynomials, and decaying harmonic signals; and linear relations between the wavelet transform of signals and the wavelet transforms of their derivatives and integrals. The wavelets are applied to system analysis. They showed accurate instantaneous frequency identification and modal decomposition of LTI Multi-Degree-of-Freedom (MDOF) systems, with better results than the Short-Time Fourier Transform (STFT) and the other harmonic wavelets used in time-frequency analysis. The modal decomposition is applied to modal parameter identification, and the properties of the Laplace and SOULTI wavelet transforms allow accurate analytical identification methods. / Ph. D. / This study introduces new families of wavelets (small wave-like functions) derived from the response of second-order underdamped (oscillating) linear time-invariant systems. The first is named the SOULTI wavelets, while the second is named the Laplace wavelets. These functions can be used in a wavelet transform, which transfers signals from the time domain to the time-frequency domain, and it is shown that the original signal can be recovered once the transform is applied. The new wavelets can have well-defined time-frequency resolutions. The time-frequency resolution is the product of the time resolution and the frequency resolution, where a resolution is the smallest time range or frequency range that carries a feature of the signal. The new wavelets have useful properties: a direct relation between scale and frequency; unique transform formulas that can be easily obtained for most elementary signals, such as the unit step, sinusoids, polynomials, and decaying oscillating signals; and linear relations between the wavelet transform of signals and the wavelet transforms of their derivatives and integrals. The wavelets are applied to system analysis. They showed accurate instantaneous frequency identification and accurate decomposition of signals into their basic oscillation frequencies, called the modes of vibration. In addition, the new wavelets are applied to infer the parameters of dynamic systems, and they show better results than the Short-Time Fourier Transform (STFT) and the other wavelets used in time-frequency analysis.
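A hedged sketch of the construction's starting point: take the impulse response of an underdamped second-order LTI system as the analyzing function and correlate it with the signal at several scales. The normalization, admissibility details, and the scale-frequency map below are illustrative assumptions, not the thesis's exact definitions.

```python
import numpy as np

def soulti_like_wavelet(t, zeta=0.1):
    """Impulse response of a unit-natural-frequency underdamped 2nd-order system."""
    wd = np.sqrt(1.0 - zeta ** 2)
    return np.where(t >= 0, np.exp(-zeta * t) * np.sin(wd * t) / wd, 0.0)

def cwt(signal, dt, scales, zeta=0.1):
    """Brute-force W(s, tau) = (1/s) * integral x(t) psi((t - tau)/s) dt."""
    out = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        tw = np.arange(0, 20 * s, dt)             # support of the decaying kernel
        psi = soulti_like_wavelet(tw / s, zeta) / s
        out[i] = np.convolve(signal, psi, mode='full')[:len(signal)] * dt
    return out

dt = 0.01
t = np.arange(0, 10, dt)
x = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 3.0 * t)
scales = 1.0 / (2 * np.pi * np.array([0.5, 1.0, 2.0, 3.0]))  # scale ~ 1/frequency
coeffs = cwt(x, dt, scales)
print(np.abs(coeffs).mean(axis=1))   # energy concentrates near 1 Hz and 3 Hz rows
```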
|
390 |
Approximation de la distribution a posteriori d'un modèle Gamma-Poisson hiérarchique à effets mixtes. Nembot Simo, Annick Joëlle (01 1900)
We propose a method for analysing count (Poisson) data based on the procedure called Poisson Regression Interactive Multilevel Modeling (PRIMM) introduced by Christiansen and Morris (1997). The Poisson regression in the PRIMM method has fixed effects only, whereas our model also incorporates random effects. As in Christiansen and Morris (1997), the model studied aims at inference based on adequate analytical approximations of the posterior distributions of the parameters, thereby avoiding computationally expensive methods such as Markov chain Monte Carlo (MCMC). The approximations are based on Laplace's method and on the asymptotic theory underlying the normal approximation of posterior distributions. Estimates of the Poisson mixed-effects regression parameters are obtained by maximizing their joint posterior density via the Newton-Raphson algorithm. This study also provides the first two posterior moments of the Poisson parameters involved, whose posterior distributions are each approximately a gamma distribution. Applications to two datasets verify that our model can, to some extent, be considered a generalization of the PRIMM method: it applies to unstratified as well as stratified Poisson data, and in the latter case it includes not only fixed effects but also random effects associated with the strata. Finally, the model is applied to data on several types of adverse events recorded by the participants of a clinical trial of a quadrivalent vaccine against measles, mumps, rubella, and varicella. The Poisson regression includes the fixed effect corresponding to the treatment/control covariate as well as random effects associated with the biological systems of the human body to which the observed adverse events are attributed.
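The computational core described, Laplace's method around a Newton-Raphson posterior mode, can be sketched for a toy Poisson regression with a treatment covariate. The prior, data, and model below are illustrative assumptions and omit the hierarchical Gamma-Poisson structure of the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(200), rng.integers(0, 2, 200)])  # intercept + treatment
beta_true = np.array([0.5, -0.7])
y = rng.poisson(np.exp(X @ beta_true))
tau2 = 10.0                                   # prior variance: beta ~ N(0, tau2 * I)

beta = np.zeros(2)
for _ in range(50):                           # Newton-Raphson on the log posterior
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu) - beta / tau2       # gradient of log posterior
    hess = -(X.T @ (mu[:, None] * X)) - np.eye(2) / tau2
    step = np.linalg.solve(hess, grad)
    beta = beta - step
    if np.max(np.abs(step)) < 1e-10:
        break

# Laplace approximation: posterior ~ N(mode, inverse of negative Hessian at mode)
post_cov = np.linalg.inv(-hess)
print(beta, np.sqrt(np.diag(post_cov)))       # posterior mode and approximate SDs
```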
|