1051 |
Simulation and Control at the Boundaries Between Humans and Assistive Robots – Warner, Holly E. January 2019
No description available.
|
1052 |
Laser-Induced Breakdown Spectroscopy: Simultaneous Multi-Elemental Analysis and Geological Applications – Sanghapi, Herve Keng-ne 06 May 2017
Under high irradiation, a fourth state of matter named plasma can be obtained. Plasmas emit electromagnetic radiation that can be recorded in the form of spectra for spectroscopic elemental analysis. With the advent of lasers in the 1960s, spectroscopists realized that lasers could be used simultaneously as a source of energy and excitation to create plasmas. The use of a laser to ignite a plasma subsequently led to laser-induced breakdown spectroscopy (LIBS), an optical emission spectroscopy capable of analyzing samples in various states (solids, liquids, gases) with minimal sample preparation, rapid feedback, and in situ capability. In this dissertation, studies of LIBS for multi-elemental analysis and geological applications are reported. LIBS was applied to cosmetic powders for elemental analysis, screening, and classification based on the raw material used. Principal component analysis (PCA) and internal standardization were used. The intensity ratios of Mg/Si and Fe/Si observed in talcum powder show that these two ratios could be used as indicators of the potential presence of asbestos. The feasibility of LIBS for the analysis of gasification slags was investigated and the results compared with those of inductively coupled plasma−optical emission spectrometry (ICP-OES). The limits of detection for Al, Ca, Fe, Si and V were determined. The matrix effect was studied using an internal standard and PLS-R. Apart from V, prediction results were close to those of ICP-OES, with accuracy within 10%. Elemental characterization of outcrop geological samples from the Marcellus Shale Formation was also carried out. The matrix effect was substantially reduced. The limits of detection for Si, Al, Ti, Mg, Ca and C were determined. The relative errors of the LIBS measurements are in the range of 1.7 to 12.6%. Gate delay and laser pulse energy were investigated with a view to quantitative analysis of the variation of trace elements in a high-pressure environment. Optimizing these parameters permits obtaining underwater plasma emission of calcium, with quantitative results on the order of 30 ppm up to a certain limit of increased pressure. Monitoring the variation of these trace elements can help predict changes in the chemical composition of a carbon sequestration reservoir.
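As a rough, hypothetical illustration of the intensity-ratio screening and PCA grouping mentioned in this abstract (the line intensities, sample labels, and code are invented, not the dissertation's data), a small numpy sketch might look like this:

```python
import numpy as np

# Hypothetical background-corrected LIBS line intensities for a few powder
# samples (columns: Mg, Fe, Si lines); values are illustrative only.
samples = np.array([
    [120.0, 15.0, 300.0],   # talcum-based powder
    [115.0, 18.0, 290.0],
    [ 40.0,  5.0, 310.0],   # powder from a different raw material
    [ 42.0,  6.0, 305.0],
])

# Intensity ratios used as screening indicators (Mg/Si and Fe/Si).
ratios = samples[:, :2] / samples[:, 2:3]
print("Mg/Si and Fe/Si ratios:\n", ratios)

# PCA on standardized intensities to examine grouping by raw material.
X = (samples - samples.mean(axis=0)) / samples.std(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
scores = X @ vt.T[:, :2]   # first two principal component scores
print("PC scores:\n", scores)
```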
|
1053 |
Modélisation des modèles autorégressifs vectoriels avec variables exogènes et sélection d’indices – Oscar, Mylène 05 1900
Ce mémoire porte sur l’étude des modèles autorégressifs avec variables exogènes et sélection d’indices. La littérature classique regorge de textes concernant la sélection d’indices dans les modèles autorégressifs. Ces modèles sont particulièrement utiles pour des données macroéconomiques mesurées sur des périodes de temps modérées à longues. Effectivement, la lourde paramétrisation des modèles complets peut souvent être allégée en utilisant la sélection d’indices aboutissant ainsi à des modèles plus parcimonieux. Les modèles à variables exogènes sont très intéressants dans le contexte où il est connu que les variables à l’étude sont affectées par d’autres variables, jouant le rôle de variables explicatives, que l’analyste ne veut pas forcément modéliser. Ce mémoire se propose donc d’étudier les modèles autorégressifs vectoriels avec variables exogènes et sélection d’indices. Ces modèles ont été explorés, entre autres, par Lütkepohl (2005), qui se contente cependant d’esquisser les développements mathématiques. Nous concentrons notre étude sur l’inférence statistique sous des conditions précises, la modélisation ainsi que les prévisions. Notre objectif est de comparer les modèles avec sélection d’indices aux modèles autorégressifs avec variables exogènes complets classiques. Nous désirons déterminer si l’utilisation des modèles avec sélection d’indices est marquée par une différence favorable au niveau du biais et de l’écart-type des estimateurs ainsi qu’au niveau des prévisions de valeurs futures. Nous souhaitons également comparer l’efficacité de la sélection d’indices dans les modèles autorégressifs ayant des variables exogènes à celle dans les modèles autorégressifs. Il est à noter qu’une motivation première dans ce mémoire est l’estimation dans les modèles autorégressifs avec variables exogènes à sous-ensemble d’indices.
Dans le premier chapitre, nous présentons les séries temporelles ainsi que les diverses notions qui y sont rattachées. De plus, nous présentons les modèles linéaires classiques multivariés, les modèles à variables exogènes puis des modèles avec sélection d’indices. Dans le deuxième chapitre, nous exposons le cadre théorique de l’estimation des moindres carrés dans les modèles autorégressifs à sous-ensemble d’indices ainsi que le comportement asymptotique de l’estimateur. Ensuite, nous développons la théorie pour l’estimation des moindres carrés (LS) ainsi que la loi asymptotique des estimateurs pour les modèles autorégressifs avec sélection d’indices (SVAR), puis nous faisons de même pour les modèles autorégressifs avec variables exogènes et tenant compte de la sélection des indices (SVARX). Spécifiquement, nous établissons la convergence ainsi que la distribution asymptotique pour l’estimateur des moindres carrés d’un processus autorégressif vectoriel à sous-ensemble d’indices et avec variables exogènes. Dans le troisième chapitre, nous appliquons la théorie spécifiée précédemment lors de simulations de Monte Carlo. Nous évaluons de manière empirique les biais et les écarts-types des coefficients trouvés lors de l’estimation ainsi que la proportion de fois que le modèle ajusté correspond au vrai modèle pour différents critères de sélection, tailles échantillonnales et processus générateurs des données. Dans le quatrième chapitre, nous appliquons la théorie élaborée aux chapitres 1 et 2 à un vrai jeu de données provenant du système canadien d’information socioéconomique (CANSIM), constitué de la production mensuelle de fromage mozzarella, cheddar et ricotta au Canada, expliquée par les prix mensuels du lait de bovin non transformé dans les provinces de Québec, d’Ontario et de la Colombie-Britannique pour la période allant de janvier 2003 à juillet 2021. Nous ajustons ces données à un modèle autorégressif avec variables exogènes complet puis à un modèle autorégressif avec variables exogènes et sélection d’indices. Nous comparons ensuite les résultats obtenus avec le modèle complet à ceux obtenus avec le modèle restreint.
Mots-clés : Processus autorégressif à sous-ensemble d’indices, variables exogènes, estimation des moindres carrés, sélection de modèle, séries chronologiques multivariées, processus stochastiques, séries chronologiques. /
This Master’s Thesis focuses on the study of subset autoregressive models with exogenous variables. Many texts from the classical literature deal with the selection of indexes in autoregressive models. These models are particularly useful for macroeconomic data measured over moderate to long periods of time. Indeed, the heavy parameterization of full models can often be simplified by using the selection of indexes, thus resulting in more parsimonious models. Models with exogenous variables are very interesting in the context where it is known that the variables under study are affected by other variables, playing the role of explanatory variables, not necessarily modeled by the analyst. This Master’s Thesis therefore proposes to study vector subset autoregressive models with exogenous variables. These models have been explored, among others, by Lütkepohl (2005), who merely sketches proofs of the statistical properties. We focus our study on statistical inference under precise conditions, modeling, and forecasting for these models. Our goal is to compare restricted models to full classical autoregressive models with exogenous variables. We want to determine whether the use of restricted models is marked by a favorable difference in the bias and standard deviation properties of the estimators as well as in forecasting future values. We also compare the efficiency of index selection in autoregressive models with exogenous variables to that in autoregressive models. It should be noted that a primary motivation in this Master’s Thesis is the estimation in subset autoregressive models with exogenous variables.
In the first chapter, we present time series as well as the various concepts attached to them. In addition, we present classical multivariate linear models and models with exogenous variables, and then we present subset models. In the second chapter, we present the theoretical framework for least squares estimation in subset autoregressive models as well as the asymptotic behavior of the estimator. Then, we develop the theory for least squares (LS) estimation as well as the asymptotic distribution of the estimators for subset autoregressive models (SVAR), and we do the same for subset autoregressive models with exogenous variables (SVARX). Specifically, we establish the convergence as well as the asymptotic distribution of the least squares estimator of a subset vector autoregressive process with exogenous variables. In the third chapter, we apply the theory specified above in Monte Carlo simulations. We evaluate empirically the biases and the standard deviations of the estimated coefficients as well as the proportion of times that the fitted model matches the true model for different selection criteria, sample sizes, and data-generating processes. In the fourth chapter, we apply the theory developed in chapters 1 and 2 to a real dataset from the Canadian Socio-Economic Information System (CANSIM), consisting of the monthly production of mozzarella, cheddar, and ricotta cheese in Canada, explained by the monthly prices of unprocessed bovine milk in the provinces of Quebec, Ontario, and British Columbia from January 2003 to July 2021. We fit these data to a full autoregressive model with exogenous variables and then to a subset autoregressive model with exogenous variables. Afterwards, we compare the results obtained with the full model to those obtained with the subset model.
Keywords: Subset autoregressive process, exogenous variables, least squares estimation, model selection, multivariate time series, stochastic processes, time series.
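As a rough illustration of least squares estimation in a subset (index-selected) VARX model of the kind studied in this thesis, the hypothetical numpy sketch below fits each equation by OLS using only its selected regressors; the toy VARX(1) process, selection mask, and dimensions are assumptions, not the thesis setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y_t is a 2-dimensional series, x_t a 1-dimensional exogenous input.
T = 200
x = rng.normal(size=(T, 1))
y = np.zeros((T, 2))
for t in range(1, T):
    y[t, 0] = 0.5 * y[t - 1, 0] + 0.3 * x[t, 0] + rng.normal(scale=0.1)
    y[t, 1] = 0.4 * y[t - 1, 1] + rng.normal(scale=0.1)

# Full regressor matrix for a VARX(1): lagged y and contemporaneous x.
Z = np.hstack([y[:-1], x[1:]])   # shape (T-1, 3)
Y = y[1:]                        # shape (T-1, 2)

# Index selection: which regressors enter each equation
# (equation 1 keeps its own lag and x; equation 2 keeps its own lag only).
mask = np.array([[True, False, True],
                 [False, True, False]])

# Equation-by-equation OLS on the selected regressors (subset VARX estimation).
coef = np.zeros((2, 3))
for i in range(2):
    sel = mask[i]
    beta, *_ = np.linalg.lstsq(Z[:, sel], Y[:, i], rcond=None)
    coef[i, sel] = beta

print("estimated subset VARX(1) coefficients:\n", coef)
```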
|
1054 |
Accuracy and Reproducibility of Laboratory Diffuse Reflectance Measurements with Portable VNIR and MIR Spectrometers for Predictive Soil Organic Carbon Modeling – Semella, Sebastian, Hutengs, Christopher, Seidel, Michael, Ulrich, Mathias, Schneider, Birgit, Ortner, Malte, Thiele-Bruhn, Sören, Ludwig, Bernard, Vohland, Michael 09 June 2023
Soil spectroscopy in the visible-to-near infrared (VNIR) and mid-infrared (MIR) is a cost-effective method to determine the soil organic carbon content (SOC) based on predictive spectral models calibrated to analytically determined SOC reference data. The degree to which uncertainty in reference data and spectral measurements contributes to the estimated accuracy of VNIR and MIR predictions, however, is rarely addressed and remains unclear, in particular for current handheld MIR spectrometers. We thus evaluated the reproducibility of both the spectral reflectance measurements with portable VNIR and MIR spectrometers and the analytical dry combustion SOC reference method, with the aim of assessing how varying spectral inputs and reference values impact the calibration and validation of predictive VNIR and MIR models. Soil reflectance spectra and SOC were measured in triplicate, the latter by different laboratories, for a set of 75 finely ground soil samples covering a wide range of parent materials and SOC contents. Predictive partial least-squares regression (PLSR) models were evaluated in a repeated, nested cross-validation approach with systematically varied spectral inputs and reference data, respectively. We found that SOC predictions from both VNIR and MIR spectra were equally highly reproducible on average and similar to the dry combustion method, but MIR spectra were more robust to calibration sample variation. The contributions of spectral variation (ΔRMSE < 0.4 g·kg−1) and reference SOC uncertainty (ΔRMSE < 0.3 g·kg−1) to spectral modeling errors were small compared to the difference between the VNIR and MIR spectral ranges (ΔRMSE ~1.4 g·kg−1 in favor of MIR). For reference SOC, the effect of uncertainty was limited to the case of biased reference data appearing in either the calibration or the validation. Given its better predictive accuracy, comparable spectral reproducibility, and greater robustness against calibration sample selection, the portable MIR spectrometer was considered overall superior to the VNIR instrument for SOC analysis. Our results further indicate that random errors in SOC reference values are effectively compensated for during model calibration, while biased SOC calibration data propagate errors into model predictions. Reference data uncertainty is thus more likely to negatively impact the estimated validation accuracy in soil spectroscopy studies where archived data, e.g., from soil spectral libraries, are used for model building, but it should be negligible otherwise.
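As a rough illustration of the kind of repeatedly cross-validated PLSR modeling described above, here is a minimal scikit-learn sketch; the spectra, SOC values, fold counts, and number of components are placeholders, not the study's actual setup.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import RepeatedKFold

# Placeholder inputs: X holds soil reflectance spectra (samples x wavelengths),
# y the dry-combustion SOC reference values in g/kg; both are simulated here.
rng = np.random.default_rng(1)
X = rng.normal(size=(75, 500))
y = rng.uniform(5.0, 60.0, size=75)

# Repeated k-fold cross-validation of a PLSR model (fold/repeat counts assumed).
cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=0)
errors = []
for train, test in cv.split(X):
    model = PLSRegression(n_components=10).fit(X[train], y[train])
    pred = model.predict(X[test]).ravel()
    errors.append(np.sqrt(np.mean((pred - y[test]) ** 2)))  # fold RMSE

print(f"mean RMSE over folds: {np.mean(errors):.2f} g/kg")
```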
|
1055 |
Computational Analysis of Flow Cytometry Data – Irvine, Allison W. 12 July 2013
Indiana University-Purdue University Indianapolis (IUPUI) / The objective of this thesis is to compare automated methods for performing analysis of flow cytometry data. Flow cytometry is an important and efficient tool for analyzing the characteristics of cells. It is used in several fields, including immunology, pathology, marine biology, and molecular biology. Flow cytometry measures light scatter from cells and fluorescent emission from dyes which are attached to cells. There are two main tasks that must be performed. The first is the adjustment of the measured fluorescence from the cells to correct for the overlap of the spectra of the fluorescent markers used to characterize a cell’s chemical characteristics. The second is to use the amounts of markers present in each cell to identify its phenotype. Several methods are compared for performing these tasks. The Unconstrained Least Squares, Orthogonal Subspace Projection, Fully Constrained Least Squares, and Fully Constrained One Norm methods are used to perform compensation and are compared. The Fully Constrained Least Squares method of compensation gives the overall best results in terms of accuracy and running time. Spectral Clustering, Gaussian Mixture Modeling, Naive Bayes classification, Support Vector Machines, and Expectation Maximization using a Gaussian mixture model are used to classify cells based on the amounts of dyes present in each cell. The generative models created by the Naive Bayes and Gaussian mixture modeling methods performed classification of cells most accurately. These supervised methods may be the most useful when online classification is necessary, such as in cell sorting applications of flow cytometers. Unsupervised methods may be used to completely replace manual analysis when no training data are given. Expectation Maximization combined with a cluster merging post-processing step gives the best results of the unsupervised methods considered.
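The compensation step described above amounts to unmixing measured detector signals with a matrix of dye signatures. Below is a minimal sketch of unconstrained and non-negative least squares unmixing; the spillover matrix and measurement values are invented, and the sum-to-one constraint of the fully constrained variants is not enforced here.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical spillover matrix: column j gives the signature of dye j across
# the detectors (illustrative values, not real fluorochrome spectra).
S = np.array([
    [1.00, 0.15, 0.02],
    [0.20, 1.00, 0.10],
    [0.05, 0.25, 1.00],
])

# Measured detector intensities for one cell.
measured = np.array([850.0, 620.0, 300.0])

# Unconstrained least squares unmixing (can yield negative abundances).
uls, *_ = np.linalg.lstsq(S, measured, rcond=None)

# Non-negative least squares, a step toward the fully constrained variants.
cls_nn, _ = nnls(S, measured)

print("ULS abundances:", uls)
print("NNLS abundances:", cls_nn)
```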
|
1056 |
Détection de l’invalidité et estimation d’un effet causal en présence d’instruments invalides dans un contexte de randomisation mendélienne – Boucher-Roy, David 08 1900
La randomisation mendélienne est une méthode d’instrumentation utilisant des instruments de nature génétique afin d’estimer, via par exemple la régression des moindres carrés en deux étapes, une relation de causalité entre un facteur d’exposition et une réponse lorsque celle-ci est confondue par une ou plusieurs variables de confusion non mesurées. La randomisation mendélienne est en mesure de gérer le biais de confusion à condition que les instruments utilisés soient valides, c’est-à-dire qu’ils respectent trois hypothèses clés. On peut généralement se convaincre que deux des trois hypothèses sont satisfaites alors qu’un phénomène génétique, la pléiotropie, peut parfois rendre la troisième hypothèse invalide. En présence d’invalidité, l’estimation de l’effet causal de l’exposition sur la réponse peut être sévèrement biaisée. Afin d’évaluer la potentielle présence d’invalidité lorsqu’un seul instrument est utilisé, Glymour et al. (2012) ont proposé une méthode qu’on dénomme ici l’approche de la différence simple qui utilise le signe de la différence entre l’estimateur des moindres carrés ordinaires de la réponse sur l’exposition et l’estimateur des moindres carrés en deux étapes calculé à partir de l’instrument pour juger de l’invalidité de l’instrument. Ce mémoire introduit trois méthodes qui s’inspirent de cette approche, mais qui sont applicables à la randomisation mendélienne à instruments multiples. D’abord, on introduit l’approche de la différence globale, une simple généralisation de l’approche de la différence simple au cas des instruments multiples qui a comme objectif de détecter si un ou plusieurs instruments utilisés sont invalides. Ensuite, on introduit les approches des différences individuelles et des différences groupées, deux méthodes qui généralisent les outils de détection de l’invalidité de l’approche de la différence simple afin d’identifier des instruments potentiellement problématiques et proposent une nouvelle estimation de l’effet causal de l’exposition sur la réponse. L’évaluation des méthodes passe par une étude théorique de l’impact de l’invalidité sur la convergence des estimateurs des moindres carrés ordinaires et des moindres carrés en deux étapes et une simulation qui compare la précision des estimateurs résultant des différentes méthodes et leur capacité à détecter l’invalidité des instruments. /
Mendelian randomization is an instrumentation method that uses genetic instruments to estimate, via two-stage least squares regression for example, a causal relationship between an exposure and an outcome when the relationship is confounded by one or more unmeasured confounders. Mendelian randomization can handle confounding bias provided that the instruments are valid, i.e., that they meet three key assumptions. While two of the three assumptions can usually be satisfied, the third assumption is often invalidated by a genetic phenomenon called pleiotropy. In the presence of invalid instruments, the estimate of the causal effect of exposure on the outcome may be severely biased. To assess the potential presence of an invalid instrument in single-instrument studies, Glymour et al. (2012) proposed a method, hereinafter referred to as the simple difference approach, which uses the sign of the difference between the ordinary least squares estimator of the outcome on the exposure and the two-stage least squares estimator calculated using the instrument. Based on this approach, we introduce three methods applicable to Mendelian randomization with multiple instruments. The first method is the global difference approach and corresponds to a simple generalization of the simple difference approach to the case of multiple instruments that aims to detect whether one or more instruments are invalid. Next, we introduce the individual differences and the grouped differences approaches, two methods that generalize the simple difference approach to identify potentially invalid instruments and provide new estimates of the causal effect of the exposure on the outcome. The methods are evaluated using a theoretical investigation of the impact that invalid instruments have on the convergence of the ordinary least squares and two-stage least squares estimators as well as with a simulation study that compares the accuracy of the respective estimators and the ability of the corresponding methods to detect invalid instruments.
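A minimal numerical sketch of the single-instrument simple difference approach described above might look as follows; the simulated effect sizes, instrument, and variable names are all hypothetical, not the thesis's data or code.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Simulated data with an unmeasured confounder u and one genetic instrument g.
u = rng.normal(size=n)                        # unmeasured confounder
g = rng.binomial(2, 0.3, size=n)              # instrument (e.g., allele count)
x = 0.5 * g + 0.8 * u + rng.normal(size=n)    # exposure
y = 0.3 * x + 0.8 * u + rng.normal(size=n)    # outcome (true causal effect 0.3)

def slope(a, b):
    """OLS slope of b on a (with intercept)."""
    A = np.column_stack([np.ones(len(a)), a])
    return np.linalg.lstsq(A, b, rcond=None)[0][1]

beta_ols = slope(x, y)                  # confounded OLS estimate
beta_2sls = slope(g, y) / slope(g, x)   # two-stage least squares (Wald ratio)

# Simple difference approach: the sign of (OLS - 2SLS) is examined to judge
# whether the instrument may be invalid.
print(beta_ols, beta_2sls, np.sign(beta_ols - beta_2sls))
```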
|
1057 |
Human Behaviour & Urban Squares : A Public Life Study of Kungsträdgården and Sergels Torg / Människor & Stadstorg : En stadslivsstudie av Kungsträdgården och Sergels Torg – Mattsson, Johan January 2019
Some public squares experience large amounts of human activity and some experience very little, even though external conditions between them create comparable opportunities for public life. The field of public life studies observes the human activity of public spaces and presents principles that predict human public behaviour, to gain a better understanding of which elements of space people are attracted to. The human staying activity at two central public squares in Stockholm – Kungsträdgården and Sergels Torg – was studied with the methodology of public life studies as outlined in Gehl & Svarre (2013), How to Study Public Life. A stationary activity mapping was performed for the two squares, where female, male, sitting and standing activity was registered. The results show that Kungsträdgården attracts more than twice the staying activity of Sergels Torg, and that the two squares are mirror images of each other in terms of gender and activity proportions, with Kungsträdgården being predominantly female and sitting and Sergels Torg male and standing. The principles, theories, previous observations and hypotheses from a selection of the most seminal works within the public life studies field frame the seven themes used to analyse the human stationary activity at the two squares: Sitting, Standing, Thermal Comfort, Psychological Comfort, Sensory Comfort, Aesthetics and Human Interaction.
|
1058 |
Federal Funding and the Rise of University Tuition Costs – Kizzort, Megan 01 December 2013
Access to education is a central part of federal higher education policy, and federal grant and loan programs are in place to make college degrees more attainable for students. However, there is still controversy about whether there are unintended consequences of implementing and maintaining these programs, and whether they are effectively achieving the goal of increased accessibility. To answer questions about whether three specific types of federal aid cause higher tuition rates and whether these programs increase graduation rates, four ordinary least squares regression models were estimated. They include changes in both in-state and out-of-state tuition sticker prices and graduation rates, as well as changes in three types of federal aid and other variables indicative of the value of a degree, for four-year public universities in Arizona, California, Georgia, and Florida for the years 2001-2011. The regressions indicate a positive effect of Pell Grants on in-state and out-of-state tuition and fees, a positive effect of disbursed subsidized federal loans on the change in the number of degrees awarded, and a positive effect of Pell Grants on graduation rates.
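As a generic illustration of estimating a set of ordinary least squares models of the kind described above (with simulated placeholder data and coefficients rather than the actual university panel), one might write:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 160  # hypothetical university-year observations (simulated, not the real panel)

# Simulated regressors: intercept, changes in Pell Grants, subsidized loans,
# unsubsidized loans, and a stand-in "value of a degree" control.
X = np.column_stack([np.ones(n), rng.normal(size=(n, 4))])

# Simulated dependent variables standing in for the estimated models.
outcomes = {
    "in-state tuition change": X @ np.array([400, 250, 30, 10, 50]) + rng.normal(scale=150, size=n),
    "out-of-state tuition change": X @ np.array([900, 300, 40, 15, 60]) + rng.normal(scale=200, size=n),
    "graduation rate": X @ np.array([55, 2, 1, 0.5, 3]) + rng.normal(scale=5, size=n),
}

# One ordinary least squares fit per dependent variable.
for name, y in outcomes.items():
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(name, np.round(beta, 2))
```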
|
1059 |
Data Driven Modeling for Aerodynamic Coefficients / Datadriven Modellering av Aerodynamiska Koefficienter – Jonsäll, Erik, Mattsson, Emma January 2023
Accurately modeling aerodynamic forces and moments is crucial for understanding the behavior of an aircraft when performing various maneuvers at different flight conditions. However, this task is challenging due to complex nonlinear dependencies on many different parameters. Currently, Computational Fluid Dynamics (CFD), wind tunnel, and flight tests are the most common methods used to gather information about the coefficients, but these methods are both costly and time-consuming. Consequently, great efforts are made to find alternative methods such as machine learning. This thesis focuses on finding machine learning models that can model the static and the dynamic aerodynamic coefficients for lift, drag, and pitching moment. Seven machine learning models for static estimation were trained on data from CFD simulations. The main focus was on the dynamic aerodynamics, since these coefficients are more difficult to estimate. Here two machine learning models were implemented, Long Short-Term Memory (LSTM) and Gaussian Process Regression (GPR), as well as ordinary least squares. These models were trained on data generated from simulated flight trajectories of longitudinal movements. The results of the study showed that it was possible to model the static coefficients with limited data and still get high accuracy. There was no machine learning model that performed best for all three coefficients or with respect to the size of the training data. Support vector regression was the best for the drag coefficients, while there was no clear best model for the lift and moment. For the dynamic coefficients, ordinary least squares performed better than expected and even better than LSTM and GPR for some flight trajectories. Gaussian process regression produced better results when estimating a known trajectory, while the LSTM was better when predicting values of a flight trajectory not used to train the models. /
Att noggrant modellera aerodynamiska krafter och moment är avgörande för att förstå ett flygplans beteende när man utför olika manövrar vid olika flygförhållanden. Denna uppgift är dock utmanande på grund av ett komplext olinjärt beroende av många olika parametrar. I nuläget är beräkningsströmningsdynamik (CFD), vindtunneltestning och flygtestning de vanligaste metoderna för att kunna modellera de aerodynamiska koefficienterna, men de är både kostsamma och tidskrävande. Följaktligen görs stora ansträngningar för att hitta alternativa metoder, till exempel maskininlärning. Detta examensarbete fokuserar på att hitta maskininlärningsmodeller som kan modellera de statiska och de dynamiska aerodynamiska koefficienterna för lyftkraft, luftmotstånd och stigningsmoment. Sju olika maskininlärningsmodeller för de statiska koefficienterna tränades på data från CFD-simuleringar. Huvudfokus låg på de dynamiska koefficienterna, eftersom dessa är svårare att modellera. Här implementerades två maskininlärningsmodeller, Long Short-Term Memory (LSTM) och Gaussian Process Regression (GPR), samt minstakvadratmetoden. Dessa modeller tränades på data skapad från flygbanesimuleringar av longitudinella rörelser. Resultaten av studien visade att det är möjligt att modellera de statiska koefficienterna med begränsad data och ändå få en hög noggrannhet. Ingen av de testade maskininlärningsmodellerna var tydligt bäst för alla koefficienterna eller med hänsyn till mängden träningsdata. Support vector regression var bäst för luftmotståndskoefficienterna, men vilken modell som var bäst för lyftkraften och stigningsmomentet var inte lika tydligt. För de dynamiska koefficienterna presterade minstakvadratmetoden bättre än förväntat och för vissa signaler även bättre än LSTM och GPR. GPR gav bättre resultat när man uppskattade koefficienterna för en flygbana man tränat modellen på, medan LSTM var bättre på att förutspå värdena för en flygbana man inte hade tränat modellen på.
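As a toy illustration of Gaussian process regression applied to an aerodynamic coefficient (not the thesis's actual data, kernel choices, or inputs), a minimal numpy sketch of the posterior mean prediction could be:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "flight data": noisy lift coefficient vs. angle of attack (assumed toy relation).
alpha_train = np.linspace(-5, 15, 25)                                   # angle of attack [deg]
cl_train = 0.1 * alpha_train + 0.3 + rng.normal(scale=0.02, size=25)    # noisy C_L samples

def rbf(a, b, length=2.0, var=1.0):
    """Squared-exponential (RBF) kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

noise = 0.02 ** 2
K = rbf(alpha_train, alpha_train) + noise * np.eye(25)   # training covariance
alpha_test = np.linspace(-5, 15, 100)
K_s = rbf(alpha_test, alpha_train)                        # test/train covariance

# GP posterior mean prediction of the lift coefficient on the test grid.
mean = K_s @ np.linalg.solve(K, cl_train)
print(mean[:5])
```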
|
1060 |
Generalized quantile regression – Guo, Mengmeng 22 August 2012
Die generalisierte Quantilregression, einschließlich der Sonderfälle bedingter Quantile und Expektile, ist insbesondere dann eine nützliche Alternative zum bedingten Mittel bei der Charakterisierung einer bedingten Wahrscheinlichkeitsverteilung, wenn das Hauptinteresse in den Tails der Verteilung liegt. Wir bezeichnen mit v_n(x) den Kerndichteschätzer der Expektilkurve und zeigen die stark gleichmäßige Konsistenzrate von v_n(x) unter allgemeinen Bedingungen. Unter Zuhilfenahme von Extremwerttheorie und starken Approximationen der empirischen Prozesse betrachten wir die asymptotischen maximalen Abweichungen $\sup_{0 \leqslant x \leqslant 1}|v_n(x)-v(x)|$. Nach Vorbild der asymptotischen Theorie konstruieren wir simultane Konfidenzbänder um die geschätzte Expektilfunktion. Wir entwickeln einen funktionalen Datenanalyseansatz, um eine Familie von generalisierten Quantilregressionen gemeinsam zu schätzen. Dabei gehen wir in unserem Ansatz davon aus, dass die generalisierten Quantile einige gemeinsame Merkmale teilen, welche durch eine geringe Anzahl von Hauptkomponenten zusammengefasst werden können. Die Hauptkomponenten sind als Splinefunktionen modelliert und werden durch Minimierung eines penalisierten asymmetrischen Verlustmaßes geschätzt. Zur Berechnung wird ein iterativ gewichteter Kleinste-Quadrate-Algorithmus entwickelt. Während die separate Schätzung von individuell generalisierten Quantilregressionen normalerweise unter großer Variabilität durch fehlende Daten leidet, verbessert unser Ansatz der gemeinsamen Schätzung die Effizienz signifikant. Dies haben wir in einer Simulationsstudie demonstriert. Unsere vorgeschlagene Methode haben wir auf einen Datensatz von 150 Wetterstationen in China angewendet, um die generalisierten Quantilkurven der Volatilität der Temperatur von diesen Stationen zu erhalten. / Generalized quantile regressions, including the conditional quantiles and expectiles as special cases, are useful alternatives to the conditional means for characterizing a conditional distribution, especially when the interest lies in the tails. We denote $v_n(x)$ as the kernel smoothing estimator of the expectile curves. We prove the strong uniform consistency rate of $v_{n}(x)$ under general conditions. Moreover, using strong approximations of the empirical process and extreme value theory, we consider the asymptotic maximal deviation $\sup_{ 0 \leqslant x \leqslant 1 }|v_n(x)-v(x)|$. According to the asymptotic theory, we construct simultaneous confidence bands around the estimated expectile function. We develop a functional data analysis approach to jointly estimate a family of generalized quantile regressions. Our approach assumes that the generalized quantiles share some common features that can be summarized by a small number of principal components functions. The principal components are modeled as spline functions and are estimated by minimizing a penalized asymmetric loss measure. An iteratively reweighted least squares algorithm is developed for computation. While separate estimation of individual generalized quantile regressions usually suffers from large variability due to lack of sufficient data, by borrowing strength across data sets, our joint estimation approach significantly improves the estimation efficiency, which is demonstrated in a simulation study. The proposed method is applied to data from 150 weather stations in China to obtain the generalized quantile curves of the volatility of the temperature at these stations.
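The joint penalized-spline estimation described above is considerably more involved, but the core asymmetric-weight idea behind an iteratively reweighted least squares scheme can be sketched for a single scalar expectile as follows (purely illustrative, with synthetic data and assumed tolerances):

```python
import numpy as np

def expectile(y, tau=0.9, n_iter=100):
    """Scalar tau-expectile via iteratively reweighted least squares."""
    mu = y.mean()
    for _ in range(n_iter):
        w = np.where(y > mu, tau, 1.0 - tau)   # asymmetric squared-error weights
        mu_new = np.sum(w * y) / np.sum(w)     # weighted least squares update
        if abs(mu_new - mu) < 1e-10:
            break
        mu = mu_new
    return mu

rng = np.random.default_rng(5)
temps = rng.normal(loc=15.0, scale=5.0, size=1000)   # synthetic "temperature" data
print(expectile(temps, tau=0.5), expectile(temps, tau=0.9))
```

For tau = 0.5 the expectile coincides with the mean, while larger tau values move the estimate toward the upper tail, which is the sense in which expectiles characterize tail behavior.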
|